Trustworthy AI – TÜVIT tested!
AI technologies are increasingly part of all areas of our everyday lives. This increases the risks posed by error-prone AI systems, data misuse, and cyber attacks.
We support you on your way to trustworthy AI: Our experts accompany you throughout your development process, carrying out detailed analyses (e.g. of the chosen model design) and tests (e.g. robustness, bias, explainability) of your AI systems. Subsequently, a certification can attest to the technical security features of your AI.
Think one step ahead
AI regulation is coming. With AI testing & certification, you can prepare optimally for future requirements today.
Show responsibility
With AI testing & certification, you fulfill your corporate responsibility and demonstrate the high priority you place on AI security.
Better safe than sorry
External testing helps to detect and eliminate potential security vulnerabilities at an early stage.
Your benefits at a glance
Proof of the security of your AI:
With AI testing & certification, you objectively demonstrate the security features of your product.
Security & robustness as a competitive advantage
Secure and robust AI systems help you build trust and thus gain a competitive advantage.
Identification of vulnerabilities in your AI systems
We systematically test your AI systems for vulnerabilities before they go to market.
Transparency & fairness as factors in success
Score points with tested AI systems whose decisions are explainable and unbiased.
Focus on your business
You need your experts for your product development - our team supports you with security.
One step ahead of AI regulation
EU-wide regulation of AI is making progress. With AI testing & certification, you can optimally prepare yourself.
Optimized training for your AI
Based on our analysis, you can retrain your AI in a targeted way with additional data sets.
Security & safety by design
With our development support approach, you integrate security and safety right from the beginning.
Our services for your AI security
We support companies from the development stage onward and perform analyses of AI systems. Depending on customer requirements, subsequent certification of the technical security features can take place.
- Pre-checks (also accompanying development) regarding problem areas & solution approaches
Scoping & gap analyses for AI testing/certification
Theoretical analyses (development process, model design)
Security and safety analysis with special test software
AI certification is already possible now
Once our testing is complete, we can certify the security features of your product in our Trusted Product scheme. The certificate documents the results of the tests performed, and the test report places these results in a structured way within the overall context.
The certification process at a glance:
- kick-off workshop
- definition of assets & threats
- definition of valid input/output ranges
- derivation of certifiable requirements ("Technical Security Requirements")
- detailed review of the model architecture
- derivation of a test plan
- calibration of test tool for the defined tests
- testing of the model using the test tool
- spot check validation of the results
- documentation of results
- issuance of evaluation report
- if all requirements are met: issuance of certificate
What are potential problem areas for artificial intelligence?
Artificial intelligence (AI) can be used to solve many data-driven problems more efficiently or faster than traditional software development. Despite all the advantages, however, its use always leads to additional sources of error, which must be considered appropriately as part of a risk analysis.
- Robustness: AI systems that use machine learning usually have a robustness weakness: even minor changes to the input can lead to incorrect classification.
- Explainability: Another problem area of AI is the lack of explainability or transparency of decisions made by AI systems. In turn, this can have an impact on fairness.
- Bias: Using artificial intelligence does not make bias disappear. This is because machines can also make discriminatory decisions (machine bias).
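The robustness weakness above can be illustrated with a minimal sketch. The toy linear classifier, its weights, and the inputs below are all invented for illustration; they are not part of any TÜVIT test tool:

```python
# Toy linear classifier: class 1 if the weighted sum is positive, else class 0.
# Weights and inputs are invented for illustration only.
w = [1.0, -1.0]

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if score > 0 else 0

x = [0.51, 0.50]            # original input: predicted class 1
x_perturbed = [0.49, 0.52]  # each value shifted by only 0.02

print(predict(x), predict(x_perturbed))  # 1 0 -- a tiny change flips the class
```

Real machine-learning models are far more complex, but the effect is the same: inputs near a decision boundary can change class under perturbations that are negligible to a human observer.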
Main points of AI examination & certification
Security / robustness
(robustness testing)
Is the AI safe and secure against attacks, accidents and errors?
Transparency
(explainability testing)
Are decision-making and the functioning of the AI transparent and comprehensible?
Fairness
(bias testing)
Are the decisions made by the AI fair to all parties?
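As a minimal illustration of one common fairness check, the sketch below compares positive-decision rates between two groups (a "demographic parity" comparison). The decisions and group labels are invented toy data, not results from an actual audit:

```python
# Toy demographic-parity check: compare the rate of positive decisions
# across two groups. Data is invented for illustration only.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = positive decision
groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']

def positive_rate(group):
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected)

gap = abs(positive_rate('A') - positive_rate('B'))
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs. 0.25 -> gap 0.50
```

A large gap between groups is a signal to investigate further; in practice, bias testing combines several such metrics with an analysis of the training data.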
FAQ – At a glance
Although AI-specific laws, norms, and standards do not yet exist, it is already possible to certify properties of AI systems.
There are various software products for performing security audits of AI. Which one is most suitable for a particular application cannot be answered in general; determining this is part of the initial phase of our testing service. TÜVIT generally uses a testing tool developed in-house for its tests, but can also make use of external testing tools.
No problem: We can also integrate third-party tools into our audit process, as long as we consider them suitable. Our proprietary TÜVIT testing tool is then used as a reference for calibration and approval of the tests.
AI systems that use machine learning usually have a robustness weakness: minor changes to the input can lead to incorrect classification. This can have far-reaching consequences. If AI is used for camera-based recognition of traffic signs, for example, rain or snow can cause a stop sign to be recognized as a right-of-way sign.
However, it is important to consider not only natural phenomena (such as the weather, pollution, etc.), but also deliberate manipulation by attackers. A small, specially prepared sticker on the stop sign or a large poster in the background of the sign can be deliberately used by attackers to deceive the AI.
This is where our robustness analysis comes in: We systematically test your system for these vulnerabilities before it goes to market. This allows you to harden your AI system appropriately during development, minimizing malfunctions in later use.
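The difference between natural disturbances and deliberate manipulation can be sketched on a toy linear model (all weights and values invented for illustration): an attacker who knows the model steps each input value in the worst-case direction, which degrades the score far more efficiently than random noise of the same magnitude:

```python
import random

# Toy linear classifier; weights and input are invented for illustration.
w = [2.0, -1.0, 0.5]
x = [0.4, 0.3, 0.2]  # score = 0.8 - 0.3 + 0.1 = 0.6 -> class 1

def predict(v):
    return 1 if sum(wi * vi for wi, vi in zip(w, v)) > 0 else 0

def sign(a):
    return 1.0 if a > 0 else -1.0

eps = 0.27  # maximum change per input value

# Deliberate manipulation: step each value against the sign of its weight,
# reducing the score by eps * sum(|w|) = 0.27 * 3.5 = 0.945 -> class flips.
x_adv = [vi - eps * sign(wi) for vi, wi in zip(x, w)]

# Natural disturbance: random noise of the same maximum magnitude,
# which may or may not flip the class.
random.seed(0)
x_noisy = [vi + random.uniform(-eps, eps) for vi in x]

print(predict(x), predict(x_adv), predict(x_noisy))
```

Robustness testing probes both cases: tolerance to random, weather-like noise and resistance to worst-case, attacker-chosen perturbations.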
Certification ensures that your customers can trust your AI system.
These AI topics might also interest you:
How do you test AI?
TÜVIT and Fraunhofer AISEC develop a joint approach to AI certification