Model Driven Security Policy Automation

On this blog, ObjectSecurity co-founder and CEO Ulrich Lang discusses security policy automation and model-driven security. The aim of this blog is to advance the state of the art in this area through the exchange of ideas.

Wednesday, 20 April 2016

Re-Examining Identity & Access Management (IAM)

(this blog post has also appeared on Tripwire's "State of Security" blog)

There is a lot security professionals disagree on when it comes to Identity & Access Management (IAM). One thing most would agree on is that IAM means many things to many people, and has been shaped more by vendor product boundaries over the years than by overarching architectures, processes, and governance. The basic term “Identity Management” (IdM) can be described very generally as an “administrative area that deals with identifying individuals in a system and controlling access to resources by placing restrictions on them” (source: Wikipedia). Well, it turns out that for most people IAM is pretty much the same as IdM – essentially an implementation of tools and processes that deal – at a basic level – with a mix of tasks:

  • managing information about user identities (and their metadata, such as roles)
  • managing information about resources/assets (often systems)
  • providing user authentication, sometimes with support for federation and single sign-on
  • some kind of provisioning, for example setting up user accounts for individuals, or configuring a SAML proxy; provisioning often comes with some workflow automation support that also deals with changes and deprovisioning etc.
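To make the provisioning/deprovisioning point concrete, here is a minimal, hypothetical sketch of such a workflow; the account store, approval step, and all names are invented for illustration and are not any particular product's API:

```python
# Sketch of a provisioning workflow with approval and deprovisioning.
# A real IAM deployment would back this with a directory and audit log.

accounts = {}

def provision(user, role, approved_by):
    """Create an account plus role metadata, but only after approval."""
    if not approved_by:
        raise PermissionError("provisioning requires an approver")
    accounts[user] = {"role": role, "active": True, "approver": approved_by}

def deprovision(user):
    """Disable rather than delete, so the audit history is preserved."""
    if user in accounts:
        accounts[user]["active"] = False

provision("alice", "nurse", approved_by="bob")
deprovision("alice")   # account kept, but no longer usable
```

The design choice worth noting is the soft delete in `deprovision`: audit/compliance visibility ("who had access to what, when") depends on keeping the record around.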

So why would an organization want IAM? There are a number of requirements that have to be met, especially: access to resources usually has to be restricted; regulatory requirements have to be met; audit/compliance teams need visibility into who should have access to what, who accessed what, etc.; and cost needs to be reduced through organizational efficiency, flexibility, and automation.
So far, so good – but really, to meet those requirements, more than the bullet list above needs to be done, and some of the assumptions about what comprises IAM need to be revisited:

Humans and machines: Firstly, in today’s interconnected world of devices, incl. Internet of Things (IoT) and bring your own (mobile) device (BYOD), IAM needs to support both human and machine users. Furthermore, IAM needs to go beyond provisioning (of user accounts etc.) and also focus on the often-neglected topic of restricting information flows between machines and/or users based on access policy.

Advanced access controls: Another complicating factor is that security policy requirements are getting increasingly complex, including “least privilege”, which – when you peel off the prevalent vendor snake oil – really means that access should only be granted for the “minimum necessary”. This is for example mandated by HIPAA. And this can mean that an access decision needs to be contextually figured out each time some user (human or machine) accesses a resource, based on many, dynamically changing factors (e.g. role, task, geolocation, time of day etc.). It is important to distinguish statically administered user attributes such as roles from such dynamically changing factors. Many IAM vendors today offer a plethora of confusing and overlapping solutions that are often useful, but mostly misnamed as “dynamic role-based access control”, “adaptive authentication” etc. What this really means is that additional context is baked into an access decision. The conceptual issue is the underlying assumption that access control is determined based on identities and context related to a user (individual) – which is often necessary but not sufficient: access control may also depend on other information, for example about the accessed resource or other context. Many advanced access control approaches have been devised over the last 10-15 years to support such complexities, such as attribute-based access control (ABAC) – where (in simplistic terms) access is determined based on rules and attributes about requestors, resources, and context; risk-adaptive access control, where access changes based on calculated risk; proximity-based access control, business process based access control, history-based access control, model-driven security etc. (just to name a few). At ObjectSecurity, we informally call all of those collectively “advanced access control approaches” (as for example implemented by ObjectSecurity OpenPMF).
Like it or not, IAM also needs to support such fine-grained, adaptive (information flow) access control. The access policies that are managed in IAM should be seen as the primary “master data” of the policy – they can either be directly implemented (using the IAM’s enforcement points), or synchronized/exported into other systems for enforcement.
For all the pentesters and whitehat hackers out there: such access control approaches, with enforcement points on many or all protected systems, add a lot of protection – maybe not so much on the system that gets hacked (because a software-based enforcement point can then be bypassed), but definitely by making “pivoting” to other systems much harder (because those systems enforce their own access policies).
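To make the ABAC idea concrete, here is a minimal sketch of a rule-based decision over requestor, resource, and context attributes. This is an illustration of the general concept only, not OpenPMF's implementation; the attribute names are invented:

```python
# Minimal ABAC sketch: a policy is a list of rules; each rule is a
# predicate over attributes of the requestor (subject), the resource,
# and the request context. Default is deny (white-listing).

def decide(policy, subject, resource, context):
    """Return "PERMIT" if any rule matches, else "DENY"."""
    for rule in policy:
        if rule(subject, resource, context):
            return "PERMIT"
    return "DENY"

# Example rule: clinicians may access records of their own department,
# but only while on shift (attribute names are illustrative).
def same_department_on_shift(subject, resource, context):
    return (subject.get("role") == "clinician"
            and subject.get("department") == resource.get("department")
            and context.get("on_shift", False))

policy = [same_department_on_shift]

print(decide(policy,
             {"role": "clinician", "department": "oncology"},
             {"department": "oncology", "type": "patient_record"},
             {"on_shift": True}))   # PERMIT
```

Note that the decision depends on resource and context attributes, not just the user's identity – exactly the "necessary but not sufficient" point made above.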

Policy-based monitoring/logging: But protect/control is not all – we also need to focus on detect/respond. Unfortunately, as has been shown by the recent “Target hack”, current detect-and-respond approaches often fail because the well-intended tools are usually implemented without sufficient policy configurations. As a result, Security Incident and Event Management (SIEM) products produce way too many incidents for anyone to realistically get through and act upon within a useful timeframe (note that the “Target hack” SIEM detected the incident, but it was drowned out by hundreds of thousands of other incidents). IAM needs to tie into monitoring and log aggregation to solve some of these issues: IAM’s incidents are usually more like “attempted policy violations”, for example if access to some information on some system was requested but denied in the specific context of the request. Knowing how such “white list” incidents relate to the policy makes it much easier for analysts to figure out criticality. Note that collecting other incidents the traditional way is also still necessary, but a lot of incidents can be discarded automatically thanks to the tie-in with the access policy.
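As a rough illustration of that policy tie-in, a log pipeline could automatically separate "attempted policy violation" events (denies reported by known enforcement points) from the general incident stream and rank them by target sensitivity. The event fields and sensitivity table below are invented for the sketch:

```python
# Sketch: triage incidents by relating them to the access policy.
# Denied requests reported by policy enforcement points ("pep") are
# "attempted policy violations" and can be ranked by the sensitivity
# of the targeted resource; everything else stays in the traditional
# SIEM queue for manual analysis.

SENSITIVITY = {"patient_record": "high", "cafeteria_menu": "low"}

def triage(events):
    violations, other = [], []
    for e in events:
        if e.get("source") == "pep" and e.get("decision") == "DENY":
            e["criticality"] = SENSITIVITY.get(e.get("resource_type"), "medium")
            violations.append(e)
        else:
            other.append(e)
    return violations, other

events = [
    {"source": "pep", "decision": "DENY", "resource_type": "patient_record"},
    {"source": "pep", "decision": "PERMIT", "resource_type": "cafeteria_menu"},
    {"source": "ids", "signature": "port_scan"},
]
violations, other = triage(events)
```

The point is not the (trivial) code but the data flow: because the deny events carry policy context, the analyst immediately knows what was attempted and how critical the target is, instead of wading through an undifferentiated incident flood.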

Access federation and authentication federation: Federated identity management is a huge misnomer in IdM, and even “federated access management” usually is too: what most products actually provide is “federated authentication management” based on cryptographic tokens that convey an authentication result. It turns out that this is different from actual “access federation”, sometimes referred to as “AuthoriZation Based Access Control” (ZBAC), and implemented – if done correctly – using standards such as OAuth 2.0. The idea is related to, but different from, authentication federation in that the tokens provided by the access federation service are not tied to an authentication, but rather include permissions (authorizations) for the token holder. In the real world, the distinction would be somewhat along the following lines:

  • Authentication token: the use of a notary who certifies that a document is authentic (or a signature to have been made by the person claimed to be the signatory). The content of the certified document can be trusted as long as the notary is trusted.
  • Access/Authorization token: A car unlocks with a car key. The car key is an authorization token, granting access to the car. Someone gives you a car key. The car does not care about who holds the car key as long as they hold the correct car key. 

The assumption in both cases is that, as long as the producer of the token is trusted, the information conveyed in the token is also trusted.
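The two token styles can be sketched in a few lines. This toy uses a shared-secret HMAC purely for illustration (real deployments use standards such as SAML assertions or OAuth tokens, with proper key management); all names are invented:

```python
# Sketch of the two token styles. Both are trusted because the issuer
# signs them; what differs is the payload: an authentication token
# asserts *who* the holder is (the notary), an authorization token
# asserts *what* the holder may do, whoever presents it (the car key).
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"   # shared secret, illustration only

def issue(payload):
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify(token):
    expected = hmac.new(ISSUER_KEY, token["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        raise ValueError("untrusted token")
    return json.loads(token["body"])

authn_token = issue({"kind": "authentication", "subject": "alice"})
authz_token = issue({"kind": "authorization", "permissions": ["unlock_car"]})

# An enforcement point consuming the authorization token checks the
# conveyed permissions, not the holder's identity:
claims = verify(authz_token)
assert "unlock_car" in claims["permissions"]
```

In both cases the verifier only needs to trust the issuer's key – which is exactly the trust assumption stated above.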

The “soft” but almighty part of IAM: Probably the most important aspect of IAM in any organization is not directly related to technology: the need for a clean IAM architecture, good IAM processes, and strong IAM governance (though the selected technology needs to support these). The idea is that the various parts of IAM (often many products) need to be brought together in a way that facilitates the central IAM management of “one truth of the data” – be it identities, assets, policies, logs, or workflows. Decentralized management should be avoided (this is where process and governance come into play!), and central IAM services should be used where possible (some synchronization may still be needed for legacy systems).

In conclusion: Getting IAM done for real requires that people are involved who deeply understand IAM – and esp. understand the need for the various capabilities I’ve described in this article: support for humans and machines, support for advanced access controls, support for policy-based monitoring/logging, and support for both access federation and authentication federation. And of course the “soft” but almighty part of IAM - strong architecture, processes, governance.

About the Author: Dr. Ulrich Lang is Founder & CEO of ObjectSecurity, a security policy automation company. He is a renowned access control and Identity & Access Management (IAM) expert with over 20 years in InfoSec (startup, large bank, academia, inventor, technical expert witness, conference program committees, proposal evaluator/reviewer etc.), with over 150 publications/presentations, and an InfoSec book author. He holds a PhD on access control from Cambridge University (2003) and a master’s in InfoSec (Royal Holloway). He is co-founder, co-inventor and CEO of ObjectSecurity (Gartner “Cool Vendor 2008”), an innovative InfoSec company that focuses on making security policies more manageable, and is on the Board of Directors of the Cloud Security Alliance (Silicon Valley).

Thursday, 25 February 2016

Access Control in 2016 – What you Need to Know

Access control is one of those topics that often means different things to different people. In its most basic form, it is simply the “restriction of access to a resource.” Unfortunately, as you drill down into what that actually means for your organization, things usually get muddy.
For some people, it is simply selectively granting user access to accounts based on the authenticated user identity; for others, based on user roles; yet for others, based on clearances. For some, it’s about locking down a network based on VLANs – so it’s not about users, but rather about machine-to-machine interactions.
There are also many access control techniques that are concerned not with controlling access based on what’s allowed (white-listing), but rather with what’s not allowed (black-listing) – like web application security tools that filter potentially malicious traffic.
If you add it all up across a typical IT organization, access control is practically everywhere, and it’s very different in many places.
Enter “access policy” – most access control approaches rely on a policy to be specified by security professionals. Especially for white-listing approaches, this policy is usually organization-specific.
Black-listing is often easier because unwanted access (e.g. malware) is often unwanted for every user of the access control technology. So, while black-listing forms a great security baseline by keeping some unwanted access out, real access control is usually only achieved with additional white-listing based on the particular security requirements the organization has.
This is where things get difficult. On the one extreme, the access control approach is simple, well-known and manageable, such as:
  • Identity-based access control (IBAC) – the requester authenticates and then gets all-or-nothing access
  • Role-based access control (RBAC) – the requester authenticates and provides a role, and gets access based on the role
  • Multi-level security (MLS) – the requester has a clearance level and only gets into resources that have no higher classification level than the requester’s clearance, etc.
The problem with those approaches is that they are almost always too simplistic to actually enforce the policy that matters to the organization. For example, HIPAA requires that a covered entity make reasonable efforts to limit itself to “the minimum necessary to accomplish the intended purpose of the use, disclosure, or request.”
Such generic (let’s call them “high-level”) policies are human-intuitive, but really not readily implementable using traditional, simple access control approaches like IBAC, RBAC, MLS (or any black-listing).
Instead, they will need to be re-interpreted into something more “low-level” (and complex) that can actually be technically implemented, such as “nurses should only get access to patient records of patients who are registered with the treating physician the nurse is currently working for, and only if the nurse and the patient are in the same building.” Such access policies are often very complex, detailed, dynamic, and contextual.
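That "low-level" nurse policy could, for instance, be expressed as a single attribute-based rule. The sketch below is purely illustrative: the attribute names and the way context is supplied are invented, and a real system would pull them from directories, scheduling systems, and location services:

```python
# The "low-level" nurse rule from the text as one ABAC predicate.
def nurse_may_read(subject, record, context):
    return (subject["role"] == "nurse"
            # patient is registered with the physician the nurse
            # is currently working for
            and record["treating_physician"] == subject["current_physician"]
            # nurse and patient are in the same building right now
            and context["nurse_building"] == context["patient_building"])

subject = {"role": "nurse", "current_physician": "dr_smith"}
record = {"treating_physician": "dr_smith"}
context = {"nurse_building": "B1", "patient_building": "B1"}
assert nurse_may_read(subject, record, context)
```

Even this toy version shows why such policies are hard to manage by hand: the decision depends on three independent, dynamically changing data sources, and any of them going stale silently changes who gets access.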
Many advanced access control approaches have been devised over the last 10-15 years to support such complexities. These include: attribute-based access control (ABAC) – where (in simplistic terms) access is determined based on rules and attributes about requesters, resources and context; risk-adaptive access control, where access changes based on calculated risk; proximity-based access control, business process based access control, history-based access control, etc.
Bridging that “semantic gap” between the human-intuitive “high-level” policies on the one hand and the technically implementable “low-level” policies on the other usually gets challenging. Correctly implementing such (and other) advanced access control policies requires a very good understanding of:
  • Today’s increasingly complex security policy requirements and how they impact technical access control implementation
  • The impact of more and more complex IT environments, such as cloud, IoT etc. on access policy
  • The available advanced access control approaches with their benefits and (complexity) challenges
  • Approaches and processes to manage advanced access policies despite the complexity and dynamicity (for example using this blog's topic, model-driven security)
  • Understanding of which advanced access controls are most suitable for which use case (e.g. enterprise, big data, cloud, IoT)
In the pursuit of educating practitioners in the access control policy implementation space, I’ll be giving an introduction to what it takes to implement and manage advanced access controls at BSidesSF.
In this quite technical session, attendees will learn: why access control policy implementation in 2016 is more complex than you may think, why traditional access control mechanisms are often insufficient, which new approaches are available, and which are suitable for what IT/business environment.


Wednesday, 10 June 2015

Cybersecurity in 2015: Manage Impact!

Today I want to share my thoughts about the recurring discussions we as security professionals often have about where to focus our efforts. One camp says "continuous monitoring is the new prevention", indicating that they have already failed to protect their information resources, and thus the main focus should be on detecting and remediating attacks, rather than on actual prevention. Another camp ("moat & castle") says that we need to build high-assurance systems that provably prevent certain attacks. Others again say that none of this works, and we need to focus on "compliance-driven security", a nice term for paper-shuffling to meet some "best" practices (irrespective of whether they provide actual protection) in order to "save their a**" (that is, their personal a**). Yet others say that none of this works either, and that we should just pay for cyber insurance. Yet others just buy random, expensive big-vendor products to "save their a**" ("nobody gets fired for buying ..." etc.). There are many more views, illustrating that the industry today really has little to offer in terms of real protection. One of the main problems is that nobody really has reliable, comprehensive risk metrics for most/all attack vectors (i.e. nobody knows reliably how big which risks really are), and no comprehensive mitigation metrics for most/all security products and measures (i.e. nobody knows reliably how much a given product/measure actually reduces which risks). So the industry is really selling a lot of "maybes" to maybe solve a lot of "maybe" problems - the only reliable number is the cost of security. The ongoing major hacks we read about illustrate that things are pretty ineffective overall.

I want to offer an alternative middle-ground viewpoint today that I feel would often help: we should focus on impact control, rather than only on detecting/mitigating or protecting (we should do those too). In other words, why not assume that your defenses will be hacked, and then figure out ways to ensure that the impact of compromises is limited? For example, if credentials get stolen (the recent US IRS hack is one of those examples), then the hacker acts as an authorized individual. Therefore, impact should be controlled for any activities of insiders and outsiders (irrespective of whether they are known to be malicious or benign).

The fine-grained access control and model-driven security discussed on this blog for years have really always been about exactly that: minimize users' access to information resources based on fine-grained, contextual access policies, so that the impact of both accidental and malicious compromise remains limited. In the example of the abovementioned IRS hack, why did the stolen credential need access to so many records? In the Wikileaks case, why did Manning need access to so much information?

In other words, impact can be controlled by implementing reliable (true!) least privilege access control, so that only the minimum necessary information can be accessed. And I am not talking about least privilege in the "poor man's solution" of privileged account management. I am talking about fine-grained, contextual access policies. This will usually require complex access control systems such as Attribute-Based Access Control (ABAC), which comes with the cost of being unwieldy and complex. Model-Driven Security (MDS), as discussed on this blog for years, helps make ABAC manageable, even in dynamically changing IT landscapes (e.g. SOA, M2M, IIoT etc.). (by the way, our OpenPMF product helps implement ABAC with MDS).

Please spread the word that we need to also control impact, rather than just (1) putting up defenses, and once compromised, much/everything can be stolen; (2) monitoring for compromises and hopefully mitigating before it is too late. (1)+(2) alone are clearly failing.

Wednesday, 22 October 2014

Implementing Proximity-Based Access Control (PBAC) using Model-Driven Security

It's been a while, because we have been BUSY (and also needed to protect some intellectual property). But finally here is a post about a particularly advanced and highly useful access control approach we have designed and implemented using Model-Driven Security (MDS):

Our Proximity-Based Access Control (PBAC) approach is a highly innovative access control method where the information provided to a subject is determined, on a need-to-know basis, by proximity attributes. It goes far beyond traditional device access based on physical proximity:

Definition: Proximity-Based Access Control (PBAC) is access control using policies that are based on the relative proximity/distance (calculated by a distance calculation function) between one or more proximity attributes associated with an accessor and one or more proximity attributes associated with an accessed resource.  PBAC is not just about physical proximity, but can involve many proximity dimensions: Geo-Location/Geospatial; Organizational; Operational; Temporal; Business Process; Security; Risk; Social; Information etc. 

Rich, dynamic, contextual, and generic policies can be expressed and enforced if attribute and calculation services can be made available to the PBAC system: For example, geospatial proximity may not be calculated based on the physical location of the requesting user and the requested resource, but for example based on the geospatial area the user’s assigned task pertains to, and the geospatial area the requested information resource pertains to: “Team leaders can access all resources which pertain to a geospatial area that overlaps at least 70% with the geospatial area associated with the requestor’s assigned task”. Or “crime analysts working on a task pertaining to a criminal can access all resources pertaining to criminals known to be within 2 hops proximity on the criminal social graph”.
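One of the "distance calculation functions" quoted above – hops on a criminal social graph – can be sketched directly; the graph data and function names below are invented for illustration and are not OpenPMF's actual interfaces:

```python
# Sketch of a PBAC distance calculation function: hop distance on a
# social graph, used by the crime-analyst policy quoted in the text.
from collections import deque

def hops(graph, a, b):
    """Shortest hop count between nodes a and b (BFS), or None."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

def may_access(graph, task_criminal, resource_criminal, max_hops=2):
    """Policy: resources about criminals within 2 hops are accessible."""
    dist = hops(graph, task_criminal, resource_criminal)
    return dist is not None and dist <= max_hops

graph = {"c1": ["c2"], "c2": ["c1", "c3"], "c3": ["c2", "c4"], "c4": ["c3"]}
assert may_access(graph, "c1", "c3")        # 2 hops: allowed
assert not may_access(graph, "c1", "c4")    # 3 hops: denied
```

The same policy shape works for the geospatial example: swap the hop-count function for an area-overlap calculation and compare against a 70% threshold instead of a hop limit.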

PBAC needs Model-Driven Security (MDS) because of PBAC’s complex policy implementation details – most conventional access control mechanisms do not support the features required to implement PBAC. Or it would be too cumbersome and error-prone to manually implement/maintain generic PBAC policies using conventional access control mechanisms.

ObjectSecurity® OpenPMF is by far the most advanced MDS product in the market. MDS generates technical security policy rules and accreditation evidence from models, using model-driven approaches.  MDS is “the tool supported process of modelling security requirements at a high level of abstraction, and using other information sources available about the system (produced by other stakeholders). These inputs, which are expressed in Domain Specific Languages (DSL), are then transformed into enforceable security rules with as little human intervention as possible. It also includes the run-time security management (e.g. entitlements / authorizations), i.e. run-time enforcement of the policy on the protected IT systems, dynamic policy updates and the monitoring of policy violations.” (source: Wikipedia, and this blog)

Note that PBAC is an extension of MDS and Attribute-Based Access Control (ABAC). PBAC differs technically from non-PBAC Attribute-Based Access Control (ABAC) systems in that a relative distance calculation function exists between attributes associated with the requesting subject, the action and/or the requested resource. OpenPMF MDS generates machine-enforceable access and logging rules that are enforced by OpenPMF’s ABAC runtime infrastructure, consisting of Policy Access Points, Policy Decision Points, Policy Enforcement Points, Attribute Source Services, Calculation Services, and Mapper Services etc. OpenPMF typically deploys a PDP/PEP/PIP combination on each protected node for robustness and security reasons.

Please contact us here if you would like to learn more about PBAC and how to implement it using MDS.

Tuesday, 22 July 2014

"Internet of Things" and Model-Driven Security - a perfect match!

In this unusually lengthy blog post I discuss how model-driven security is (and has always been) a perfect match for the Internet of Things. We are just wrapping up an embedded systems security consulting project, and are working hard in another large project to bundle our OpenPMF model-driven security policy automation product for easy adoption for IoT. OpenPMF has also supported industrial IoT middleware platforms such as DDS for many years. Furthermore we are working on some privacy by design opportunities with various partners. In this post you will see how all this comes together beautifully :) If you already know IoT and IoT security well, feel free to skip the next two paragraphs.

Internet of Things (IoT): The IT industry is currently at a significant pivotal point: the miniaturization and commoditization of hardware (with sensors and actuators), together with “always-on” network connectivity, is putting software everywhere. A current buzzword for this trend is the "Internet of Things" (IoT), "the network of physical objects that contain embedded technology to communicate and sense or interact with their internal states or the external environment". IoT is about embedded interconnected systems and applications, with mostly "machine-to-machine" (M2M) interactions (the traditional internet has many more user-to-machine interactions). Excluding PCs, tablets and smartphones, IoT is forecast to grow to 26 billion units installed in 2020 (representing almost 30x growth from 0.9 billion in 2009, with IoT product & service suppliers' incremental revenue exceeding $300 billion, resulting in $1.9 trillion in global economic value-add through sales into diverse end markets). IoT blurs the line between traditional electronic devices and software-driven computers. IoT includes many safety-critical and mission-critical industries, such as smart grid, smart cities, smart homes/home automation, security systems, and “wearables” (a very fast-growing 78.4% CAGR market segment that includes e.g. smart watches, health monitors etc.), just to name a few. Analysts predict that much of the IoT will be built from software running on standardized, network-connected hardware platforms (not too different from the PC world of the 80s/90s). Costs for such embedded hardware devices are so low that standard hardware (essentially a very small, fully functional computer) is often more cost-efficient and faster-to-market than purpose-built hardware. As a consequence, there is now software in consumer electronics such as internet-enabled power plug adapters, internet-enabled light bulbs (e.g. Philips Hue), internet-enabled sprinkler systems (e.g. Cyber Rain Smart Irrigation Controllers), and internet-enabled home security systems (e.g. ADT Pulse), just to name a few. More traditional applications for embedded systems (e.g. cars) also see an increasing focus on software to drive functionality.

IoT Security: IoT security is currently mostly poor, remains a significant technical challenge, and represents a significant, mostly untapped market opportunity. To showcase just one example of poor/failed IoT security: authorities in several U.S. states reported that a hacker had once again broken into and defaced electronic road signs over highways. In June 2014, news media in North Carolina reported that at least three highway signs there had apparently been compromised and re-worded to read "Hack by Sun Hacker." Similar incidents were reported between May 27 and June 2, 2014 in two other states, which spotted variations on that message left by the perpetrator (including an invitation to chat with him on Twitter). “A variety of network-controlled home automation devices lack basic security controls, making it possible for attackers to access their sensitive functions, often from the Internet. Some of these devices are used to control door locks, surveillance cameras, alarm systems, lights and other sensitive systems. Home automation systems are often connected to security devices, so they are part of the overall security of a home. Because of this, they should have security controls built into them. Companies that manufacture these systems are trying to get their products to market as fast as possible, and they often overlook security testing because it impedes that process” (source).
An analyst states that  “engineering staff at embedded device OEMs rate security as the single most common obstacle to their organizations developing connected products”, “two-thirds of embedded engineers say that security is very important or extremely important to their customers”, “fewer than half of engineering firms conduct penetration testing of their embedded devices”, “security related software and hardware represent 5% or less of Bill of Materials costs for most embedded engineering projects, but that proportion is expected to nearly double over the next three years”, and “although security requirements add to development time and costs, OEMs are successfully able to raise prices in response”. A good paper on the lack of cyber security (and what needs to be done to fix it) for power grids has been written by RAD. Another good paper about the implication of the evident lack of security in enterprise-connected mobile and embedded devices is here. “Due to the low cost of adding IoT capability to consumer products, Gartner expects that "ghost" devices with unused connectivity will be common. This will be a combination of products that have the capability built in but require software to "activate" it and products with IoT functionality that customers do not actively leverage. In addition, enterprises will make extensive use of IoT technology, and there will be a wide range of products sold into various markets, such as advanced medical devices; factory automation sensors and applications in industrial robotics; sensor motes for increased agricultural yield; and automotive sensors and infrastructure integrity monitoring systems for diverse areas, such as road and railway transportation, water distribution and electrical transmission.  "By 2020, component costs will have come down to the point that connectivity will become a standard feature, even for processors costing less than $1. 
This opens up the possibility of connecting just about anything, from the very simple to the very complex, to offer remote control, monitoring and sensing," said Mr. Middleton. "The fact is that today, many categories of connected things in 2020 don't yet exist. As product designers dream up ways to exploit the inherent connectivity that will be offered in intelligent products, we expect the variety of devices offered to explode." Another question is what will happen to all the aging and increasingly insecure IoT devices, considering that frequent, automated software patching may not be practical, but new attacks will emerge continuously. One solution would be to design these devices with an expiration date, i.e. program them to stop working at the end of a specified lifetime. As with embedded devices, an industry analyst (on a phone call I had with Gartner) predicts that consumers will pressure vendors to provide better security because of the criticality of some of the IoT infrastructure and its impact on our physical world. An immediate market where consumer pressure will be strong is embedded systems for healthcare: healthcare device companies are currently panicking and looking for solutions. In the mid-term, manufacturing is another market where consumer pressure will be strong, because of the use of robots, IoT etc.

Model-driven security: The perfect match! Turns out that we have implemented model-driven security (MDS) for IoT for over a decade, without using the terms IoT or M2M, esp. for air traffic management, intelligent transport systems, telecoms network management systems, utilities. This is because M2M environments are highly favorable for implementing MDS: For MDS to work well, a "system description" of some sort needs to be fed into the MDS model transformation process. While parts of this system description can be automatically detected, MDS is most effective if there is a repository that holds information about systems and their well-defined interconnections (as is the case for IoT/M2M). This differs from traditional office environments with general purpose desktops and servers, where interactions are made ad-hoc by users (e.g. emailing documents, retrieving documents from a network drive etc.).

So if you are wondering what to do about the much-talked-about IoT security challenges, maybe it's time to consider model-driven security (MDS). Our IoT-ready OpenPMF MDS product is an ideal candidate to implement effective IoT security. We have over 15 years of publications about this topic here, and can be contacted here.

Model-driven security (MDS) for Privacy by Design (PbD)

I have blogged for years that model-driven security is an ideal approach for the management of policies that are relatively intuitive for humans, but cumbersome to technically implement across large, interconnected IT landscapes. Privacy policies are a perfect example of such policies - it is easy to say, for example, "I only want my personal information to be collected if necessary for the purpose, only to be used for the purpose, and irrevocably deleted when the purpose has been completed". However, ensuring this technically involves a major access control infrastructure that, based on rich, contextual policies, controls information flows between applications and systems.
Our Open Privacy Framework (OPF), which we are currently developing with a number of partner companies, implements the principles of Privacy by Design (PbD) (as well as NIST SP 800-53, Appendix J, and others).
It turns out that model-driven security is an ideal technology to traceably bridge the gap between these human-intuitive PbD policies and their concrete technical implementation:
1) Policy management: PbD needs a manageable, intuitive, user-centric privacy policy authoring feature that lets users set the privacy policies governing users, systems, applications, and interactions (information flows). It needs to allow users and administrators to author and/or select privacy policies captured in intuitive models (OMG-style Domain-Specific Languages, DSLs). To solve the management challenges of attribute-based access control (ABAC), and to turn human-intuitive, generic PbD policies into technically enforceable policy rules, we recommend "model-driven security" (MDS) policy automation: MDS is the tool-supported process of modeling security requirements at a high level of abstraction and using other information sources available about the system (produced by other stakeholders) to automatically generate technical policy rules. MDS takes the privacy model, the generated system description, and other information as inputs into its "model transformations" and automatically generates fine-grained access rules (which are information-flow-based and attribute-based) as well as configurations for the various other components of the solution. In this way MDS simplifies and automates security policy authoring and management, and automatically generates/updates fine-grained technical policy rules for the full technology stack.
2) Policy enforcement: PbD needs a tool that enforces the generated technical privacy rules and configurations across the IT landscape (e.g. using ABAC), across the information lifecycle and the software development lifecycle. MDS solves the main management and implementation challenges around ABAC. The tool also needs to alert the proper people that something is happening that needs attention, or take action.
3) Policy compliance: PbD needs a user-centric tool that lets users verify (audit) that their policies are enforced correctly. This feature uses "model-driven security" accreditation automation to automatically correlate, analyze, and document the traceable correspondence between the technical security policy implementation (e.g. ABAC) and the information assurance requirements captured in "undistorted" requirements models (e.g. Common Criteria, control objectives). It also documents "supporting evidence" for accreditation/compliance purposes, and helps audit "as-is" processes & controls against the defined security policies for privacy.
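The policy management step above can be sketched in a toy, purely illustrative form: an intuitive PbD statement ("collect only if necessary for the purpose, delete afterwards") is mechanically expanded into purpose-bound, per-flow technical rules. All system names, fields, and the necessity set are invented assumptions, not part of any real OPF or OpenPMF interface.

```python
# Human-intuitive PbD policy model (what a user would actually say).
PBD_POLICY = {
    "purpose": "treatment",
    "collect_only_if_necessary": True,
    "delete_after_purpose": True,
}

# Information flows from the (generated) system description.
INFORMATION_FLOWS = [
    {"source": "patient-portal", "target": "ehr-system", "data": "health-record"},
    {"source": "ehr-system", "target": "marketing-db", "data": "health-record"},
]

# Which flows are necessary for the purpose (another model input).
NECESSARY_FOR_PURPOSE = {("patient-portal", "ehr-system")}

def generate_privacy_rules(policy, flows):
    """Expand the PbD policy into fine-grained, per-flow technical rules."""
    rules = []
    for flow in flows:
        necessary = (flow["source"], flow["target"]) in NECESSARY_FOR_PURPOSE
        effect = "permit" if (necessary or not policy["collect_only_if_necessary"]) else "deny"
        rule = {"effect": effect, **flow, "purpose": policy["purpose"]}
        if effect == "permit" and policy["delete_after_purpose"]:
            # Attach a deletion obligation, enforced when the purpose completes.
            rule["obligation"] = "delete-on-purpose-completion"
        rules.append(rule)
    return rules

rules = generate_privacy_rules(PBD_POLICY, INFORMATION_FLOWS)
```

Note how the single intuitive statement fans out into one rule per information flow, including a deletion obligation on permitted flows; that fan-out is exactly the manual burden MDS automates.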
Please contact us if you have any further questions about MDS or PbD.

Monday, 21 July 2014

Externalized Authorization Management (EAM) in the "trough of disillusionment" - Model-driven security is a way out!

Gartner's "Hype Cycle for Governance, Risk and Compliance Technologies, 2014"* puts "Externalized Authorization Management" (EAM) in the "Trough of Disillusionment", stating that "Gartner clients have reported that maintaining a fine-grained access policy across a large number of custom or complex applications can be a daunting and expensive proposition". In line with this, we have discussed many times (in most of our own publications over the past decade) that authoring and maintaining a large number of fine-grained EAM access policies across many applications is expensive, difficult, time-consuming, error-prone, and non-traceable. This is especially true for attribute-based access control (ABAC), which supports extremely rich and flexible access rules.

However, it is important to note that Gartner's "Hype Cycle for Governance, Risk and Compliance, 2014" category is "Externalized Authorization Management", and not model-driven security (MDS) used with EAM. For over a decade, we have shown that MDS is an ideal candidate to resolve EAM's manageability/complexity challenges. Over the years, we have carried out many MDS deployments (over EAM & ABAC), implementing highly complex policies. MDS works best if there is a human understanding of the policy that is general, intuitive, and simple. MDS (like EAM & ABAC) requires access to a number of attribute sources (and often also attribute mappings) to turn this general human understanding of the policy into the detailed technical rules that can be technically enforced (e.g. by EAM, ABAC). This can be a challenge; however, the resulting level of automation, manageability, usability, support for system agility, and traceable compliance by far outweighs the costs.

An example we have recently designed and implemented for a customer is something we tentatively call "relationship-based access control" (RelBAC; we also call this a highly enhanced, multidimensional "proximity-based access control" model): humans are usually great at expressing general access policies based on the relationship between themselves (or something they are associated with) and the resource they are trying to access (or something that resource is associated with). Due to restrictions we cannot post too many details here, but please contact us if you would like to hear more about this.
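Since we cannot share the customer design, here is only a generic, hypothetical sketch of the relationship idea: permit access when the accessor is within a maximum relationship distance of the resource in a relationship graph. The graph, entities, and threshold below are all invented for illustration and are far simpler than a real multidimensional proximity model.

```python
from collections import deque

# Toy undirected relationship graph: a doctor is assigned to a ward, the
# ward hosts a patient, the patient owns a record.
RELATIONSHIPS = {
    "dr-jones": ["ward-7"],
    "ward-7": ["dr-jones", "patient-42"],
    "patient-42": ["ward-7", "record-42"],
    "record-42": ["patient-42"],
}

def relationship_distance(graph, start, goal):
    """Breadth-first search for the shortest relationship path length."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # no relationship path at all

def relbac_permit(graph, accessor, resource, max_distance=3):
    """Permit iff accessor and resource are related closely enough."""
    dist = relationship_distance(graph, accessor, resource)
    return dist is not None and dist <= max_distance
```

The appeal of this policy style is that the human statement ("doctors may see records of patients on their ward") maps directly onto a graph query, while the technical rules it implies would otherwise have to be enumerated by hand.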

* Gartner, Hype Cycle for Governance, Risk and Compliance Technologies, 17 July 2014, John A Wheeler

Friday, 11 April 2014

Attribute-Based Access Control (ABAC) adoption 70% by 2020 - Model-Driven Security helps make ABAC manageable!

Gartner (Gregg Kreizmann, at the Identity and Access Summit in Nov 2013) forecasts that by 2020, 70% of all businesses will use attribute-based access control (ABAC) as the dominant mechanism to protect critical assets, up from less than 5% today. ABAC is about saying which good stuff should be allowed (whitelisting). This is contrary to most of what the security industry does today, which is saying which bad stuff should not be allowed (blacklisting). Blacklisting suffers from many issues, esp. around accuracy (false positives and false negatives).

However, while whitelisting a la ABAC does not have those problems, it frequently suffers from the complexity overload related to the authoring and maintenance/update of the many complex, changing, fine-grained access rules. As a result, ABAC hasn't taken off as much as it should have by now. Another issue is related to the complexity of the ABAC infrastructure: potentially many attributes have to be aligned, attribute sources (PIPs) have to be plugged into the ABAC system, and the enforcement end (PEP) needs to be plugged into the information flow. All in all, this is not a minor undertaking. But it is worth the effort in the long run (the same way IdM and PKI only materialized ROI after a while).

Model-driven security (MDS) policy automation is a critically important part of the ABAC story: it allows the specification of human-intuitive, generic, undistorted policy models, and automatically turns those into the fine-grained, technical ABAC rules. Part of the "secret sauce" is that MDS feeds other information sources into this process to figure out what rules to generate. MDS also allows automated checking for compliance/accreditation. See the Wikipedia article and our website for details.
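The ABAC plumbing mentioned above (PDP, PEP, PIPs) can be sketched minimally. This is not a real XACML engine; the attribute names, resolvers, and rule format are invented for illustration only, but the shape matches the standard decomposition: a policy decision point evaluates attribute-based rules, resolving attributes on demand from pluggable attribute sources.

```python
# Pluggable attribute sources (PIPs): attribute name -> resolver function.
PIPS = {
    "department": lambda subject: {"alice": "radiology"}.get(subject),
    "clearance": lambda subject: {"alice": "high"}.get(subject, "low"),
}

# Auto-generated fine-grained rules: required attribute values per resource.
RULES = [
    {"resource": "xray-archive",
     "requires": {"department": "radiology", "clearance": "high"}},
]

def pdp_decide(subject, resource):
    """Policy decision point: evaluate rules, pulling attributes from PIPs."""
    for rule in RULES:
        if rule["resource"] != resource:
            continue
        if all(PIPS[attr](subject) == value
               for attr, value in rule["requires"].items()):
            return "Permit"
    return "Deny"  # whitelist: anything not explicitly permitted is denied
```

A PEP in front of the resource would call `pdp_decide` and block the invocation on "Deny"; the rules list is exactly the artifact MDS would generate and keep up to date.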

In summary, MDS fixes some of the complexity issues that become evident when ABAC gets deployed. Oh, and by the way, we are not the only ones saying that: industry analyst firm Gartner identified model-driven security as part of its "Top 10 Strategic Technologies for 2012", and selected OpenPMF, a model-driven security product, as a "Cool Vendor" product in 2008.

Wednesday, 30 January 2013

Model-Driven Security possible without Model-Driven Software Engineering? Of course!

Today I want to clarify that model-driven security (MDS) does not necessarily rely on model-driven development to work - even though it relies on application, system, and interaction models (so-called “functional models”) to achieve significant security policy automation. The traditional MDS approach is that these functional models ideally come from manually defined application models authored during model-driven development (e.g. UML, BPMN). But this is not necessary. We have designed an additional solution for our OpenPMF where the functional models are in fact obtained from an IT asset management tool that is part of our partner’s (Promia, Inc.) intrusion detection/prevention product Raven. This works well, and enables the use of model-driven security in environments which do not support model-driven development or where model-driven development is not desired.
While this may not sound like a big deal, it is in fact a big deal, because it increases the widespread applicability of model-driven security dramatically, and makes adoption a lot easier.

Tuesday, 29 January 2013

"Policy as a Service"

The financial ROI of Cloud security and compliance is judged by decision makers in end-user organizations by the same measures as Cloud computing in general, i.e. by how much it cuts up-front capital expenditure and in-house manual maintenance cost. However, manually translating security policy into technical implementation is difficult, expensive, and error-prone (esp. for the application layer). In order to reduce security-related manual maintenance cost at the end-user organization, security tools need to become more automated. With the emergence of Cloud PaaS, it is therefore logical to move all or parts of the model-driven security architecture into the Cloud to protect and audit Cloud applications and mashups with maximal automation. In particular, policies are provided as a Cloud service to application development and deployment tools (i.e. “Policy as a Service”), and policy automation is embedded into Cloud application deployment and runtime platforms (i.e. automated policy generation/update, enforcement, monitoring). Different Cloud deployment scenarios are possible, which differ from local non-Cloud deployments where model-driven security is conventionally installed within or alongside a locally installed development tool (e.g. Eclipse). Policy as a Service (see ObjectSecurity OpenPMF) involves five parts:
  1. Policy Configuration from the Cloud: Policy configurations are provided as subscription-based Cloud service to application development tools. Offering specification, maintenance, and update of policy models as a Cloud service to application developers and security experts has significant benefits: Most importantly, instead of having to specify (or buy and install) and maintain the policy models used for model-driven security on an on-going basis, application developers and security specialists can now simply subscribe to the kinds of policy feeds they require without the need to know the details of the models. The Policy as a Service provider (typically different from the Cloud provider) takes care of policy modeling, maintenance, and update. Other benefits are that the user organization does not need to be a security and compliance expert because the up-to-date policy models will be provided as a feed to them on an on-going basis, that the upfront cost hurdle is minimized thanks to the subscription model, and that there is no need by the end user organization to continually monitor regulations and best practices for changes.
  2. Automatic Technical Policy Generation in the Cloud: The automatic policy generation feature of MDS is integrated into the development, deployment, and mashup tools (to get access to functional application information). It consumes the policy feed described in the previous section. Platform as a Service (PaaS) sometimes includes both Cloud-hosted development and mashup tools and a Cloud-hosted runtime application platform. In this case, automatic technical policy generation using model-driven security (MDS) can also be moved into the Cloud, so that technical security policies can automatically be generated for the applications during the Cloud-hosted development, deployment and/or mashup process. This is in particular the case for mashup tools, because those tools are more likely to be Cloud-hosted, are often graphical and/or model-driven, and are concerned with interactions and information flows between Cloud services. If the development tools are not hosted on the PaaS Cloud, then the MDS technical policy auto-generation feature needs to be integrated into the local development tools.
  3. Automatic Security Policy Enforcement in the Cloud: Policy enforcement should naturally be integrated into the PaaS application platform so that the generated technical policies are automatically enforced whenever Cloud services are accessed. As described in the previous section, policies are either generated within Cloud using hosted MDS and PaaS development tools, or are uploaded from local MDS and development tools. How policy enforcement points are built into the PaaS application platform depends on whether the PaaS application platform (1) allows the installation of a policy enforcement point (e.g. various open source PaaS platforms, e.g. see case studies below), (2) supports a standards based policy enforcement point (e.g. OASIS XACML), or (3) supports a proprietary policy enforcement point.
  4. Automatic Policy Monitoring in the Cloud: Policy enforcement points typically raise security-related runtime alerts, especially about incidents related to invocations that have been blocked. The collection, analysis and visual representation of those alerts can also be moved into the Cloud. This has numerous benefits: incidents can be centrally analyzed for multiple Cloud services together with other information (e.g. network intrusion detection). Also, an integrated visual representation of the security posture across multiple Cloud services can be provided, integrated incident information can be stored for auditing purposes, and compliance-related decision support tools can be offered as a Cloud service.
  5. Automatic Updating: The described model-driven approach enables automatic updates of technical security policy enforcement and auditing whenever applications and especially their interactions, change. The same automation is possible when security policy requirements change.
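The subscription mechanics of part 1 can be sketched as follows. This is a deliberately simplified, hypothetical model (the class names, version scheme, and feed contents are all invented): a deployment platform subscribes to a versioned policy feed and re-applies the policy model only when the provider publishes a new version.

```python
class PolicyFeed:
    """Stands in for the subscription-based Cloud policy service."""
    def __init__(self):
        self.version = 1
        self.model = {"default": "deny", "compliance": "baseline"}

    def publish(self, model):
        """Provider-side: push an updated policy model to subscribers."""
        self.version += 1
        self.model = model

class PlatformSubscriber:
    """Stands in for a deployment/runtime platform consuming the feed."""
    def __init__(self, feed):
        self.feed = feed
        self.applied_version = 0
        self.active_model = None

    def poll(self):
        """Apply the feed's model if it is newer than what we already have."""
        if self.feed.version > self.applied_version:
            self.active_model = self.feed.model
            self.applied_version = self.feed.version
            return True  # here, regeneration of technical rules would be triggered
        return False

feed = PolicyFeed()
subscriber = PlatformSubscriber(feed)
```

The end-user organization never edits the policy models themselves; they simply track the provider's feed, which is the economic point of the subscription model described above.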

Publications about this can be found in the ISSA Journal October 2010 and on IBM developerWorks. Contact me if you would like more information about Policy as a Service.

AuthoriZation Based Access Control (ZBAC) and model-driven security

Today I would like to share my views about authoriZation Based Access Control (ZBAC) and how it relates to model-driven security policy automation. There are numerous incarnations of the basic idea: an authorization server issues cryptographically signed tokens for other parties upon request, which are used as an access attribute source when access decisions are made. A standard that can be used to implement authorization tokens is OAuth – however, unfortunately it is often less-than-optimally used to implement what really is a single-sign-on (SSO) authentication assertion token (SAML was originally designed to do that!). The difference is this: SSO is when I authenticate to a central service, which then confirms that I am who I am to whatever service I want to access. ZBAC is more like asking a central service for some access permissions, which can then be used to access a service. A ZBAC token is more like a car key, which gives an authorization to unlock a car to whoever holds the key. While the token should typically be signed and bound to an identity, the holder of the token should be able to issue that authorization (or subsets of it) to other parties (i.e. the car does not care if I lend you my car key – it will still unlock). This is clearly critical for authorization delegation and other necessary features in today’s interconnected IT application chains. Alan Karp et al have written some great reports about the concept and its uses (see “From ABAC to ZBAC”, and "Authorization-Based Access Control for the Services Oriented Architecture").
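The "car key" property can be made concrete with a toy sketch: the authorization server signs a set of permissions, and the resource verifies the signature and the requested permission, not the identity of whoever presents the token. Real deployments would use proper token formats and identity binding as discussed above; the HMAC scheme, key, and permission names below are illustrative assumptions only.

```python
import hashlib
import hmac
import json

# Shared secret between the authorization server and the resource (verifier).
SERVER_KEY = b"authz-server-secret"

def issue_token(permissions):
    """Authorization server: sign a set of permissions into a token."""
    payload = json.dumps(sorted(permissions)).encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def access_allowed(token, requested_permission):
    """Resource side: check the signature and the permission -- not the holder."""
    expected = hmac.new(SERVER_KEY, token["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered token
    return requested_permission in json.loads(token["payload"])

token = issue_token(["unlock-car", "start-engine"])
# Whoever holds the token can use it -- lending it delegates the authorization,
# just as lending a car key does.
```

Tampering with the permission list invalidates the signature, but handing the intact token to someone else works; that is precisely the delegation property that makes ZBAC attractive for interconnected application chains.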

How does this relate to model-driven security? In fact the main problem model-driven security solves still remains: how can you author the policies that go into the authorization tokens, and how can you maintain a correct policy without a maintenance nightmare? Model-driven security, as implemented for example by ObjectSecurity OpenPMF, can do this for authorization-based access control (ZBAC), attribute-based access control (ABAC, e.g. XACML), and role/identity-based access control (RBAC/IBAC) in a unified fashion: you author policies in models, and model-driven security policy automation then generates the policy rules that drive (1) ZBAC: the authorization server’s decisioning as to which permissions to add to whose authorization tokens, and (2) ABAC: the policy decision point (PDP) that protects the accessed resource via a policy enforcement point (PEP) based on the policy. In summary, model-driven security can be used as a great mechanism to bring various policy models, such as ZBAC and ABAC, under one unified umbrella. This keeps the maintenance effort and error-potential low.

Monday, 5 November 2012

On Resource-Based Access Control (ResBAC) vs. Identity-Based Access Control (IBAC)

Most technical access control policies today are specified "identity-centric", a term I use for policies that are specified with the accessor (= the calling identity) in mind. This is the "who?", "who is accessing?", "who can do what?" etc. in the question that the policy answers. One of the reasons for this is that the information security industry has advocated identity-based access control (IBAC) for two decades or longer by now, involving identity management (IdM), identity & access management (IAM), federated identity management, single sign-on, role-based access control (RBAC) etc. After such a long time of advocacy and education, the "collective mindset" of the information security industry has been shaped by the solution, rather than by the problem it is trying to solve. Now that we have deployed the "hammer" (IAM/RBAC), everything looks like a "nail" (identity-centric), and the industry keeps on "bolting" access policies onto identity systems. Unfortunately, most information security professionals will agree that identity-centric access control is not as easy to deploy and manage at scale as it seems...

It turns out that it is often easier to start stating your access policies with the protected resources in mind ("resource-based access control", ResBAC), and then "bolt" the attributes that are relevant for access onto the protected resource. The questions now change to "how is the resource to be protected?", "what is allowed?", "what are the conditions for access to the resource?" etc. This shift in thinking about and structuring the policy can simplify things because (1) the required protection is often more clear and logical, and thus easier to specify this way, and (2) the access policy usually gets decided and enforced at the protected resource, so it is easier to distribute resource-based access policies to the place of decisioning/enforcement. Note that identities still play a critical role, but in the structure of the policy they are one of potentially many attributes (in the context of attribute-based access control, ABAC) that determine access to the protected resource.
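The structural shift can be illustrated with a small, purely hypothetical sketch: the policy lives at the resource, and identity-derived information is just one attribute among the conditions "bolted onto" it. Resource names, attributes, and values below are invented for illustration.

```python
# Policy attached to the resource, not to identities: for each condition
# attribute, the set of acceptable values.
RESOURCE_POLICIES = {
    "patient-record": {
        "conditions": {
            "role": {"doctor", "nurse"},   # identity-derived attribute
            "context": {"on-shift"},       # environmental attribute
        }
    }
}

def resbac_decide(resource, attributes):
    """Decide at the resource: every condition must be satisfied."""
    policy = RESOURCE_POLICIES.get(resource)
    if policy is None:
        return "Deny"  # no policy defined at the resource: default deny
    for attr, allowed in policy["conditions"].items():
        if attributes.get(attr) not in allowed:
            return "Deny"
    return "Permit"
```

Because the whole policy for `patient-record` sits in one place, it can be shipped to (and evaluated at) the protected resource, which is the distribution advantage noted above.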

Model-driven security (MDS) is an ideal tool for implementing resource-based access control (ResBAC) efficiently and effectively. This is because policies can be expressed resource-based in policy models, which are then automatically turned into the matching technical access policy rules for a particular system (e.g. SOA/cloud mash-up), and distributed to the protected resources for run-time enforcement. 

Thursday, 22 March 2012

Study estimates 59% accreditation cost saving using automated "Correct by Construction (CxC)" tools (& more for agile SOA/Cloud)

I recently found an interesting technical article by the U.S. National Security Agency's (NSA) Tim Kremann in "The Next Wave", the National Security Agency's review of emerging technologies (Vol. 19, No. 1, 2011, "High Confidence Software and Systems"). The article, titled "Correct by Construction: Advanced Software Engineering" (p. 22ff), argues that a correct-by-construction (CxC) methodology, such as model-driven software engineering, can improve assurance and reduce the time and cost to certify assurance (incl. security and correctness). In particular, the article quotes a study by Kestrel Institute, which showed that the automatic generation of certification documents (incl. Common Criteria supporting evidence) as part of a CxC methodology can be estimated to result in "an average overall cost reduction of about 59 percent per certification application due to using CxC methods". Overall, this led the Kestrel researchers to "conclude that a CxC process will produce a certified product for roughly 30-40 percent of the cost of a conventional process".

These findings are in line with our numerous real-world experiences of using model-driven security approaches to automate both technical security policy implementation (MDS) and accreditation evidence generation (MDSA) (alongside model-driven service orchestration or model-driven development). However, our own empirical research has shown that the cost reduction is much higher than 59% (compared to manual approaches) for agile IT landscapes, especially Service Oriented Architectures (SOAs) "system of systems" and the very related Cloud PaaS mash-up "system of systems". The following argument makes the accreditation/re-accreditation cost reduction evident:
(1) We can save 59 percent during the first-time accreditation, as the Kestrel researchers mentioned above identified, using an automated model-driven approach (MDS/MDSA);
(2) The manual re-accreditation cost after agile SOA orchestration/Cloud mash-up changes can be significant, because changes across the entire "system of systems" need to be analyzed in order to figure out what impacts re-accreditation and how;
(3) Doing this analysis and change evidence generation automatically based on all the application/interaction/system models, security models, and accreditation requirements models reduces that cost dramatically: it can be close to zero if the automated analysis finds that the changes do not impact the current accreditation; if manual re-accreditation is necessary, the cost is dramatically reduced because the supporting evidence and a summary of changes and their impacts are automatically produced.
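Step (3) can be sketched in miniature: compare the previous and current "system of systems" models and report which accreditation-relevant sections changed. The model structure and the notion of which sections are accreditation-relevant are invented assumptions for illustration; a real MDSA analysis works over far richer models.

```python
# Which model sections are accreditation-relevant (an assumption here).
ACCREDITATION_RELEVANT = {"flows", "security_controls"}

def reaccreditation_impact(old_model, new_model):
    """Return the accreditation-relevant model sections that changed."""
    changed = {key for key in set(old_model) | set(new_model)
               if old_model.get(key) != new_model.get(key)}
    return changed & ACCREDITATION_RELEVANT

old = {"flows": [("a", "b")], "security_controls": ["tls"], "ui_theme": "light"}
cosmetic = {"flows": [("a", "b")], "security_controls": ["tls"], "ui_theme": "dark"}
rewired = {"flows": [("a", "c")], "security_controls": ["tls"], "ui_theme": "light"}
# A cosmetic change yields no accreditation impact; rewiring a flow does.
```

When the returned set is empty, no re-accreditation work is triggered (the "close to zero" case); when it is non-empty, the changed sections themselves form the summary of impacts handed to the accreditor.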

Please contact us if you have any questions about model-driven security or about this blog post.

Friday, 20 January 2012

Analysis Series: "Security Recommendations for Cloud Computing Providers" (German Federal Office for Information Security)

In this post I would like to share my views of the "Authorisation" section (p. 37 in the English version) of the German Bundesamt für Sicherheit in der Informationstechnik (BSI) (Federal Office for Information Security) white paper "Security Recommendations for Cloud Computing Providers (Minimum information security requirements)" (download, German & English). The section states:

The rights management system must ensure that each role may only see the data (including meta-data) required to achieve the task. The access control should be role-based and the roles and authorisations set up should be reviewed regularly. In general, the least privilege model should be used, with users and CSP administrators only possessing the rights that they require to achieve their tasks. Particular attention should be directed here towards privileged users. If the role is that of a CSP administrator, it should be possible to demonstrate that the only data viewed was that which was required for the task. The rights management system should also be capable of fully documenting and monitoring data exports and imports from and to the CSP. Lastly, any particularly critical administration activities, such as installing patches, should only be performed on the four-eye principle.

As with most other guidance documents previously analyzed in this blog post series, this section makes strong requirements statements at a high level ("only see the data...required to achieve the task", "least privilege"), but at the same time recommends only less-than-optimal technical controls ("access control should be role-based"). As previously identified with other guidance documents, the recommendations fail to address the fact that access control needs to be highly contextual to achieve effective "least privilege" for a particular task. Granting role-based access to some particular job function based on everything that job function might ever potentially need to access for all tasks it might ever do is not effective "least privilege"!

In order to minimize access rights to exactly what needs to be accessed ("least privilege") in a particular situation ("task"), the context of the task, as well as other environmental context (such as time of day, crisis level, or whether a particular patient of the treating doctor who wants to access that patient's health record is currently checked into the hospital), needs to be taken into account.
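The difference between role-based and contextual "least privilege" can be shown with a minimal, hypothetical sketch: the same doctor role is only permitted to access a record while there is an active treatment relationship (a contextual attribute), rather than merely by holding the role. All names and the break-glass rule are illustrative assumptions.

```python
# Contextual attribute source, e.g. fed from the hospital admissions system:
# which doctor currently has an active treatment relationship with which patient.
ACTIVE_TREATMENTS = {("dr-smith", "patient-42")}

def contextual_permit(subject, role, patient, crisis_level="normal"):
    """Least privilege in context: role alone is never sufficient."""
    if role != "doctor":
        return False
    if crisis_level == "emergency":
        return True  # break-glass: wider access under a declared crisis context
    return (subject, patient) in ACTIVE_TREATMENTS
```

A pure RBAC rule ("doctors may read patient records") would permit every doctor to read every record at all times; the contextual check above narrows that to the treating doctor, which is what the BSI text actually demands.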

The fact that this (and other) government issued guidance does not address the issue that traditional access controls (incl. role-based access control) are only partly effective to achieve "least privilege" is unfortunate. It allows enterprises to continue to get away with not really solving the real underlying security challenges they are facing, with customers having to pay the price for the damage caused by these only partly effective security measures.

However, real solutions are available today: model-driven security policy automation (e.g. OpenPMF) together with fine-grained, contextual authorization management (e.g. XACML) helps implement real contextual, fine-grained access controls in a manageable way. Model-driven security alleviates the main challenge of authorization management, which is that policies for fine-grained, contextual authorization management are hard to manage and maintain, especially for dynamically changing (agile) IT landscapes such as Service Oriented Architectures (SOAs) and Cloud mash-ups. Please feel free to read this blog, our website, or contact me.

Friday, 9 December 2011

Gartner identifies model-driven security as part of "Top 10 Strategic Technologies for 2012"

Industry analyst firm Gartner identifies model-driven security as part of "Top 10 Strategic Technologies for 2012", in the context of "Contextual and Social User Experience". They write "A contextually aware system anticipates the user’s needs and proactively serves up the most appropriate and customized content, product or service. Context can be used to link mobile, social, location, payment and commerce. It can help build skills in augmented reality, model-driven security and ensemble applications." (source: Gartner Identifies the Top 10 Strategic Technologies for 2012). It is great to see that industry analysts continue to acknowledge the significant potential of model-driven security to automate the technical implementation of contextual, rich, and expressive security policies. Read more about model-driven security on this blog, or contact us for more information.

Monday, 1 August 2011

Analysis Series: NISTIR 7628 Smart Grid Security Recommendations

In this “analysis series” blog post, I will focus on US NIST’s 537-page "Guidelines for Smart Grid Cyber Security" (NIST IR 7628). Here are some interesting recommended controls I have analyzed:
  1. Least privilege access control: The recommended control “Least Privilege” (NIST IR 7628 - SG.AC-7) requires that “the organization assigns the most restrictive set of rights and privileges or access needed by users for the performance of specified tasks”, and that “the organization configures the smart grid information system to enforce the most restrictive set of rights and privileges or access needed by users”. In other words, a caller should only be granted access to a resource if that caller has a need to do so in the specific context, for example a particular step in a business process, or a particular system situation such as emergency level.
  2. Information flow enforcement: The recommended control “Information Flow Enforcement” (NIST IR 7628 - SG.AC-5) requires that the smart grid information system enforces assigned authorizations for controlling the flow of information within the smart grid information system and between interconnected smart grid information systems in accordance with applicable policy. Information flow control regulates where information is allowed to travel within a smart grid information system and between smart grid information systems. As example implementations, the document mentions boundary protection devices that restrict smart grid information system services or provide a packet-filtering capability. This section of the document also offers a number of supplemental considerations. Particularly interesting for the discussion in this paper, the guidance recommends “dynamic information flow control allowing or disallowing information flows based on changing conditions or operational considerations”.
  3. Incident monitoring, incident reporting, and auditing: Related to achieving visibility, numerous recommendations for incident monitoring, incident reporting, and auditing are spread throughout the NIST IR 7628 document. For example: “Smart Grid Information System Monitoring Tools and Techniques” (SG.SI-4) requires that “the organization monitors events … to detect attacks, unauthorized activities or conditions, and non-malicious errors” based on the organization’s “monitoring objectives and the capability of the smart grid information system to support such activities”. The supplemental guidance states that this can be achieved through a variety of tools and techniques (e.g., intrusion detection systems, intrusion prevention systems, malicious code protection software, log monitoring software, network monitoring software, and network forensic analysis tools), and can include real-time alerting. “Incident Monitoring” (SG.IR-6) requires that “the organization tracks and documents … security incidents”, maybe using “automated mechanisms to assist in the tracking of security incidents and in the collection and analysis of incident information”. “Incident Reporting” (SG.IR-7) requires incident reporting procedures about what is an incident, granularity of incident information, who receives it etc., again potentially employing “automated mechanisms to assist in the reporting of security incidents”. “Auditable Events” (SG.AU-2) requires, in order to identify significant and relevant events that need to be auditable, the development and review of a list of auditable events on an organization-defined frequency, including execution of privileged functions. “Audit Monitoring, Analysis, and Reporting” (SG.AU-6) requires audit record reviews and analyses to find and report inappropriate or unusual activity, potentially employing automated, centralized analysis tools.
“Audit Reduction and Report Generation” (SG.AU-7) supports near real-time analysis and after-the-fact investigations of security incidents, e.g. by automatically processing audit records for events of interest based on selectable event criteria. “Audit Generation” (SG.AU-15) recommends audit record generation capability, potentially from multiple components into a system-wide audit trail that is time-correlated.
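The "selectable event criteria" idea behind audit reduction (SG.AU-7) is easy to sketch. The record fields and criteria below are hypothetical examples, not a standard audit schema:

```python
# Illustrative sketch: audit reduction by selectable event criteria (cf. SG.AU-7).
# Record fields ("event", "user", "privileged") are invented for the example.

from typing import Callable

AuditRecord = dict

def make_filter(**criteria) -> Callable[[AuditRecord], bool]:
    """Build a predicate that matches records on every given field/value pair."""
    return lambda rec: all(rec.get(k) == v for k, v in criteria.items())

def reduce_audit(records: list, *filters) -> list:
    """Keep only records of interest: those matching any selectable filter."""
    return [r for r in records if any(f(r) for f in filters)]

records = [
    {"event": "login", "user": "op1", "privileged": False},
    {"event": "breaker_open", "user": "op2", "privileged": True},
    {"event": "login", "user": "admin", "privileged": True},
]

# Report privileged-function executions (cf. SG.AU-2) for analyst review.
of_interest = reduce_audit(records, make_filter(privileged=True))
```

At smart grid scale, the open question is who defines and maintains the criteria; as argued below, policy-driven monitoring derives them from the authorization policy itself.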
All this makes sense, but it is easier to write about than to actually implement, especially at the scale of a smart grid. Let’s discuss each recommendation in turn to see how model-driven security policy automation can help implement it effectively:
  1. Least privilege access control: What this specifically means is that a dynamic access control “whitelist” (i.e., a policy stating what is allowed, vs. a “blacklist” stating what is not allowed) needs to be available to enforce that policy requirement. Static access control models such as identity-based access control (IBAC) or role-based access control (RBAC) are not sufficient because they do not capture contextual information (e.g., time, location, or system state) in the policy. As a result, virtually all IBAC/RBAC implementations, including traditional Identity and Access Management (IAM) technologies, are insufficient on their own. Attribute-based access control (ABAC), as standardized for example in XACML, helps add this missing context and other expressions to the policy. The flip side of ABAC is that such fine-grained contextual authorization policies are extremely difficult, time-consuming, and error-prone for human administrators to author and maintain manually. Model-driven security policy automation as implemented in OpenPMF can solve this unmanageability problem of ABAC.
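To see why context matters, here is a minimal ABAC-style sketch (not XACML syntax; the attribute names and the rule are invented for illustration). The same subject and resource yield different decisions depending on an environment attribute, which is exactly what IBAC/RBAC cannot express:

```python
# Hypothetical ABAC sketch: access is granted only when subject, resource,
# and environment attributes all match the whitelist rule.

def abac_permit(subject: dict, resource: dict, env: dict) -> bool:
    """Illustrative rule: operators may act on devices in their own region,
    and only while on shift (a contextual environment attribute)."""
    return (subject.get("role") == "operator"
            and subject.get("region") == resource.get("region")
            and env.get("on_shift") is True)

# Context makes the decision dynamic: same subject/resource, different outcome.
sub = {"role": "operator", "region": "west"}
res = {"type": "breaker", "region": "west"}
print(abac_permit(sub, res, {"on_shift": True}))
print(abac_permit(sub, res, {"on_shift": False}))
```

Multiply this one rule by thousands of subjects, resources, and contextual conditions and the manual authoring problem described above becomes apparent.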
  2. Information flow enforcement: As mentioned above, IBAC and RBAC are insufficient on their own, and due to the inherently changing (“agile”) nature of today’s interconnected IT landscapes (“systems of systems”), ABAC policies would need to be manually updated after every “system of systems” change to remain correct, resulting in a policy management nightmare. ABAC also has a number of other problems, e.g. challenges around authorization delegation across service chains and impersonation, which can be solved using authorization-based access control (ZBAC), which uses authorization tokens issued by federated authorization token servers. Model-driven security policy automation as implemented in OpenPMF can solve the unmanageability problem of ABAC and ZBAC.
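The ZBAC delegation idea can be illustrated with a deliberately simplified sketch: a token server mints a scoped authorization token, and each downstream service in a chain checks the token rather than the caller's identity, which makes delegation explicit. The HMAC signing scheme and field names are my own toy construction, not a specification of ZBAC or OpenPMF:

```python
# Simplified ZBAC-style sketch: scoped authorization tokens checked by
# downstream services. The token format and shared key are illustrative only.

import hashlib
import hmac
import json

SERVER_KEY = b"demo-key"  # shared with verifiers in this toy example

def issue_token(scope: str) -> str:
    """Authorization server: mint a token granting one scope."""
    body = json.dumps({"scope": scope})
    sig = hmac.new(SERVER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token: str, required_scope: str) -> bool:
    """Any service in the chain: accept the token if the signature is valid
    and it grants the required scope -- no caller identity needed."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SERVER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and json.loads(body)["scope"] == required_scope)

tok = issue_token("read:meter_data")
# The token can be handed to a downstream service (delegation) and still verifies:
print(verify_token(tok, "read:meter_data"))
print(verify_token(tok, "write:settings"))
```

A real deployment would use asymmetric signatures, expiry, and federated token servers; the sketch only shows why the authorization, not the identity, travels down the service chain.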
  3. Incident monitoring, incident reporting, and auditing: In the context of the fine-grained contextual authorization mentioned earlier, incident monitoring, reporting, and audit are intrinsically intertwined with authorization. Monitoring, reporting, and audit tools need to know the specific authorization policies in order to decide whether behaviour is in fact suspicious or not. This differs dramatically from traditional monitoring approaches, which mainly monitor for generic vulnerabilities (i.e., vulnerabilities that are the same for a particular technology, rather than specific to a particular business) and therefore do not need to know any specifics about the organization’s business processes in order to flag an incident. I call control and visibility for generic vulnerabilities “security hygiene”, to distinguish them from organization-specific policy enforcement and monitoring. Model-driven security incident monitoring and analysis, as implemented in OpenPMF, can solve the policy-driven monitoring challenge for authorization management compliance.
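The policy-driven monitoring idea above can be sketched in a few lines: instead of matching generic attack signatures, the monitor replays each observed event against the same authorization policy the enforcement points use and flags mismatches. The policy rule and event format here are invented for illustration, not OpenPMF's actual mechanism:

```python
# Sketch of policy-driven incident monitoring: observed behaviour that the
# authorization policy would not have allowed is a candidate incident.
# The policy rule and event fields are hypothetical examples.

def policy_allows(event: dict) -> bool:
    """The organization's authorization policy (same one used for enforcement)."""
    return event.get("role") == "operator" and event.get("action") == "read"

def monitor(events: list) -> list:
    """Return events the policy would NOT have allowed -- candidates for
    incident reporting, since observed behaviour diverges from policy."""
    return [e for e in events if not policy_allows(e)]

observed = [
    {"role": "operator", "action": "read"},
    {"role": "contractor", "action": "write"},   # policy violation -> incident
]
incidents = monitor(observed)
```

Because the monitor reuses the authorization policy, keeping the monitoring criteria current reduces to keeping the policy current, which is the part model-driven security automates.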
I hope you enjoyed this analysis; comments are, as always, appreciated.