Threat Modeling: The Invisible Link between Security, Privacy by Design and Responsible AI

January 26, 2026 Data governance, Data security, Data Management

Contributors: Sarib Khan, CIPP/E, CIPM, Counsel, Progress Software and George Ribarski, Senior Principal Product Security Engineer, Progress Software

Happy Data Privacy Week 2026!

In this blog, Progress experts highlight how our commitment to Secure SDLC underpins everything we do. For several years, we’ve assessed our product teams against OWASP SAMM, aiming for high maturity. This enables our engineers to be well-trained and apply best practices that enhance both user experience and security. We believe that for Progress, security and privacy are core to our development culture that not only includes compliance but also a continuous improvement mindset.

What is threat modeling?

Threat modeling is a structured approach to identify, prioritize and mitigate potential threats to a system. Threat models also support the application of Privacy by Design and Security by Design principles.

In other words, threat modeling is a structured way to identify potential risks and threat actors and determine the best way to prevent exploitation. To be applied in an effective and efficient way, threat modeling needs to be completed before the code is written, the product is developed or the AI model is trained or finetuned. The best moment to do threat modeling is in the design phase of product development. Threat modeling is not just a consideration for security teams; it is an important step in product design as well.

Are system failures usually code mistakes or blind spots in the product design?

Often, threats are found in system misconfigurations, inappropriate processes or data flows, insecure or excessive data collection or poor authentication practices. For example, a feature that was developed and released quickly may have passed functional and standard tests. However, without using threat modeling during product design, it could also unnecessarily capture excessive amounts of data because it uses default configurations. The excessive amounts of data can then turn into direct violation of the data minimization principle and pose additional risks to individuals or organizations.

Sometimes, the threats may be hidden in otherwise properly configurated systems but remain undiscovered until something goes wrong. That said, threat modeling is a powerful approach that can be used to catch these blind spots early in development.

How do products and teams benefit from applying threat modeling?

 Without Threat ModelingWith Threat Modeling
1.Reactive fixes after something goes wrongProactive design that controls risks
2.Compliance chaosReady to satisfy the most common regulatory and customer requirements (HIPAA, GDPR, CCPA, DORA, NIS2, Cyber Resilience Act, AI acts, etc.)
3.Erosion of trustBuilt-in safeguards that increase trust

 

How is threat modeling the foundation of Privacy by Design and Security by Design?

Security by Design is a cybersecurity principle where security is built into a system during design and development, rather than added after its creation. For example, by applying threat modeling, you could identify and mitigate data exfiltration paths and authentication flaws early in the development process. These can be triggers for important access control decisions. Separately, you can identify various abuse cases, including API abuse. If the product developed has AI features, then by using threat modeling, you can identify AI model inference endpoints. These are all examples of how threat modeling is a strong tool to support Privacy by Design and Security by Design principles.

Why is threat modeling critical for AI-powered products?

Today’s fast-paced modern world has heightened requirements for tech security and versatility. The use of AI amplifies some pre-existing risks and adds new ones by contributing to increased data volume, data sensitivity and potential data leakages in output. In addition, new AI-specific threats including prompt injection, model inversion and hallucinations have been introduced. Without adapting traditional security reviews to the new reality, the chance of overlooking these risks is high. This is where threat modeling frameworks can step in to help developers and decision makers identify the new risks and apply appropriate mitigation controls to adapt the systems against specific risks associated with the use of AI.

How can software engineers and product managers do threat modeling?

Here are some actionable steps you can take to use threat modeling:

  • Define business goals or use cases
  • Map data flows
  • Identify assets (data, AI models, APIs, etc.)
  • Define threat actors, threat vectors and possible threats
  • Assess the risk by asking yourself the following: What can go wrong? Who could be potentially harmed? How likely and severe can this harm be?
  • Based on the above, determine and prioritize your mitigation controls early in the process and document your threat model.

There are also various tools and frameworks that you can apply or use to develop a holistic and tailored approach to meet the specific needs of your product or organization. Some of the most widely recognized frameworks include:

  • STRIDE - https://en.wikipedia.org/wiki/STRIDE_model
  • MITRE PANOPTIC™ - https://ptmworkshop.gitlab.io/#/panoptic
  • LINDDUN - https://linddun.org, both for Privacy Threat Modeling

How does threat modeling impact the Product Development Lifecycle?

As with any large-scale rollout, implementing threat modeling may initially slow down the process. However, in the long run, it in fact speeds up delivery by preventing rework caused by insecure or otherwise privacy in a non-compliant way. Performing threat modeling during the product design phase increases the probability that the product is developed with fewer, if any, late-stage surprises.

As mentioned earlier, threat modeling helps make a product or service regulatory-ready. This means that from the beginning, you’ll have clearer security and privacy requirements. When AI governance requirements are added to the mix, threat modeling plays an important role in identifying potential risks by allowing you to adopt appropriate mitigation controls and make your AI product or feature safer.

Finally, a proper and well-documented threat modeling exercise can be pivotal in speeding up audits, assessments and approvals, which are all aimed at one goal — to develop and release a safe and trustworthy product or service.

Threat modeling is a reliable approach to embed privacy, security and responsible AI considerations into products and processes in the very early stages of their development. As systems, applications and processes grow more complex, threat modeling helps organizations move faster without compromising security or trust.

Conclusion

Threat modeling is more than a technical exercise; it is a mindset that strengthens every stage of product development. By examining systems early and holistically, teams can uncover hidden risks, design more resilient architectures and ensure that privacy, security and responsible AI principles are embedded from the start. As technology grows more complex and regulatory expectations rise, threat modeling provides a dependable path to building products that users can trust. When organizations make it a consistent part of their design framework, they not only reduce rework and redundancies, but also accelerate innovation with confidence.

Velina Georgieva

Velina is part of the Progress Software Enterprise Legal Services team where her area of practice is focused on data protection and data privacy. She is also a Certified Information Privacy Manager and Information Privacy Professional for Europe, as well as a member of the International Association of Privacy Professionals. 

Read next The Future of Data Privacy and Its Impact on Marketing