Model-Driven Security Policy Automation

On this blog, ObjectSecurity co-founder and CEO Ulrich Lang discusses security policy automation and model-driven security. The aim of this blog is to advocate and advance the state of the art in this area through the exchange of ideas. www.modeldrivensecurity.org - www.policyautomation.org - www.objectsecurity.com

Tuesday, 22 July 2014

"Internet of Things" and Model-Driven Security - a perfect match!

In this unusually lengthy blog post I discuss how model-driven security is (and has always been) a perfect match for the Internet of Things. We are just wrapping up an embedded systems security consulting project, and in another large project we are working hard to bundle our OpenPMF model-driven security policy automation product for easy IoT adoption. OpenPMF has also supported industrial IoT middleware platforms such as DDS for many years. Furthermore, we are working on some privacy by design opportunities with various partners. In this post you will see how all of this comes together beautifully :) If you already know IoT and IoT security well, feel free to skip the next two paragraphs.

Internet of Things (IoT): The IT industry is currently at a significant pivotal point: the miniaturization and commoditization of hardware (with sensors and actuators), together with "always-on" network connectivity, is putting software everywhere. A current buzzword for this trend is the "Internet of Things" (IoT): "the network of physical objects that contain embedded technology to communicate and sense or interact with their internal states or the external environment". IoT is about embedded, interconnected systems and applications, with mostly "machine-to-machine" (M2M) interactions (the traditional internet has many more user-to-machine interactions). Excluding PCs, tablets, and smartphones, IoT is forecast to grow to 26 billion installed units in 2020 (an almost 30-fold increase from 0.9 billion in 2009), with IoT product and service suppliers' incremental revenue exceeding $300 billion and resulting in $1.9 trillion in global economic value-add through sales into diverse end markets. IoT blurs the line between traditional electronic devices and software-driven computers. IoT includes many safety-critical and mission-critical industries, such as smart grid, smart cities, smart homes/home automation, security systems, and "wearables" (a very fast-growing market segment, with a 78.4% CAGR, that includes e.g. smart watches and health monitors), just to name a few. Analysts predict that much of the IoT will be built from software running on standardized, network-connected hardware platforms (not too different from the PC world of the 80s/90s). Costs for such embedded hardware devices are so low that standard hardware (essentially a very small, fully functional computer) is often more cost-efficient and faster to market than purpose-built hardware. As a consequence, there is now software in consumer electronics such as internet-enabled power plug adapters, internet-enabled light bulbs (e.g. Philips Hue), internet-enabled sprinkler systems (e.g. Cyber Rain smart irrigation controllers), and internet-enabled home security systems (e.g. ADT Pulse). More traditional applications for embedded systems (e.g. cars) also see an increasing focus on software to drive functionality.

IoT Security: IoT security is currently mostly poor; it remains a significant technical challenge and a significant, mostly untapped market opportunity. To showcase just one example of poor/failed IoT security: authorities have reported that a hacker has once again broken into and defaced electronic road signs over highways in several U.S. states. In June 2014, news media in North Carolina reported that at least three highway signs there had apparently been compromised and re-worded to read "Hack by Sun Hacker." Similar incidents were reported between May 27 and June 2, 2014 in two other states, which spotted variations on that message left by the perpetrator (including an invitation to chat with him on Twitter). "A variety of network-controlled home automation devices lack basic security controls, making it possible for attackers to access their sensitive functions, often from the Internet. Some of these devices are used to control door locks, surveillance cameras, alarm systems, lights and other sensitive systems. Home automation systems are often connected to security devices, so they are part of the overall security of a home. Because of this, they should have security controls built into them. Companies that manufacture these systems are trying to get their products to market as fast as possible, and they often overlook security testing because it impedes that process" (source). An analyst states that "engineering staff at embedded device OEMs rate security as the single most common obstacle to their organizations developing connected products", "two-thirds of embedded engineers say that security is very important or extremely important to their customers", "fewer than half of engineering firms conduct penetration testing of their embedded devices", "security related software and hardware represent 5% or less of Bill of Materials costs for most embedded engineering projects, but that proportion is expected to nearly double over the next three years", and "although security requirements add to development time and costs, OEMs are successfully able to raise prices in response". A good paper on the lack of cyber security in power grids (and what needs to be done to fix it) has been written by RAD. Another good paper, about the implications of the evident lack of security in enterprise-connected mobile and embedded devices, is here. "Due to the low cost of adding IoT capability to consumer products, Gartner expects that 'ghost' devices with unused connectivity will be common. This will be a combination of products that have the capability built in but require software to 'activate' it and products with IoT functionality that customers do not actively leverage. In addition, enterprises will make extensive use of IoT technology, and there will be a wide range of products sold into various markets, such as advanced medical devices; factory automation sensors and applications in industrial robotics; sensor motes for increased agricultural yield; and automotive sensors and infrastructure integrity monitoring systems for diverse areas, such as road and railway transportation, water distribution and electrical transmission." "By 2020, component costs will have come down to the point that connectivity will become a standard feature, even for processors costing less than $1. This opens up the possibility of connecting just about anything, from the very simple to the very complex, to offer remote control, monitoring and sensing," said Mr. Middleton.
"The fact is, that today, many categories of connected things in 2020 don't yet exist. As product designers dream up ways to exploit the inherent connectivity that will be offered in intelligent products, we expect the variety of devices offered to explode." Another question is what will happen to all the aging and increasingly insecure IoT devices, considering that frequent, automated software patching may not be practical, but new attacks will emerge continuously.  One solution would be to design these devices with an expiration date, i.e. they are programmed to stop working at the end of a specified lifetime. As with embedded devices, an industry analyst (on a phone call I had with Gartner) predicts that consumers will pressure vendors to provide better security because of the criticality of some of the IoT infrastructure and their impact on our physical world. An immediate market where consumer pressure will be strong is embedded systems for healthcare: Healthcare device companies are currently panicking and looking for solutions. In the mid-term, manufacturing is a market where consumer pressure will be strong is manufacturing, because of the use of robots, IoT etc.

Model-driven security: The perfect match! It turns out that we have implemented model-driven security (MDS) for IoT for over a decade, without using the terms IoT or M2M, especially for air traffic management, intelligent transport systems, telecoms network management systems, and utilities. This is because M2M environments are highly favorable for implementing MDS: for MDS to work well, a "system description" of some sort needs to be fed into the MDS model transformation process. While parts of this system description can be automatically detected, MDS is most effective if there is a repository that holds information about systems and their well-defined interconnections (as is the case for IoT/M2M). This differs from traditional office environments with general-purpose desktops and servers, where interactions are initiated ad hoc by users (e.g. emailing documents, retrieving documents from a network drive, etc.).
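
To make this concrete, here is a deliberately tiny sketch (with invented names and formats - not OpenPMF's actual models or APIs) of how such a model transformation can combine one generic policy statement with a machine-readable system description to generate fine-grained technical whitelist rules:

```python
# Minimal sketch of an MDS model transformation: a high-level policy model
# plus a machine-readable system description are compiled into fine-grained,
# enforceable whitelist rules. All names and structures are illustrative.

# System description, e.g. imported from an asset repository or IoT registry.
system_model = {
    "nodes": {
        "meter-17":  {"zone": "field",   "type": "sensor"},
        "gateway-3": {"zone": "field",   "type": "gateway"},
        "scada-1":   {"zone": "control", "type": "server"},
    },
    "flows": [  # well-defined M2M interconnections
        ("meter-17", "gateway-3", "publish_reading"),
        ("gateway-3", "scada-1", "forward_reading"),
        ("meter-17", "scada-1", "debug_dump"),
    ],
}

# Human-intuitive policy model: one generic statement, not per-node rules.
policy_model = "allow only flows that stay in one zone or pass through a gateway"

def transform(system, _policy):
    """Generate one technical allow-rule per acceptable flow (whitelisting)."""
    rules = []
    for src, dst, op in system["flows"]:
        src_n, dst_n = system["nodes"][src], system["nodes"][dst]
        same_zone = src_n["zone"] == dst_n["zone"]
        via_gateway = "gateway" in (src_n["type"], dst_n["type"])
        if same_zone or via_gateway:
            rules.append({"effect": "allow", "source": src,
                          "target": dst, "operation": op})
    return rules  # everything not generated here is denied by default

for rule in transform(system_model, policy_model):
    print(rule)  # meter-17 -> scada-1 "debug_dump" is never whitelisted
```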

So if you are wondering what to do about the much-talked-about IoT security challenges, maybe it's time to consider model-driven security (MDS). Our IoT-ready OpenPMF MDS product is an ideal candidate to implement effective IoT security. We have over 15 years of publications about this topic here, and can be contacted here.


Model-driven security (MDS) for Privacy by Design (PbD)

I have blogged for years that model-driven security is an ideal approach for the management of policies that are relatively intuitive for humans, but cumbersome to technically implement across large, interconnected IT landscapes. Privacy policies are perfect examples of such policies - it is easy to say, for example, that "I only want my personal information to be collected if necessary for the purpose, only to be used for the purpose, and irrevocably deleted when the purpose has been completed". However, ensuring this technically involves a major access control infrastructure that, based on rich, contextual policies, controls information flows between applications and systems.
Our Open Privacy Framework (OPF), which we are currently developing with a number of partner companies, implements the principles of Privacy by Design (PbD) (as well as NIST SP 800-53, Appendix J, and others).
It turns out that model-driven security is an ideal technology to traceably bridge the gap between these human-intuitive PbD policies and their concrete technical implementation:
1) Policy management: PbD needs a manageable, intuitive, user-centric privacy policy authoring feature that lets users set privacy policies governing users, systems, applications, and interactions (information flows). It needs to allow users and administrators to author and/or select privacy policies captured in intuitive models (OMG-style Domain-Specific Languages, DSLs). To solve the management challenges of attribute-based access control (ABAC), and to turn human-intuitive, generic PbD policies into technically enforceable policy rules, we recommend "model-driven security" (MDS) policy automation: MDS is the tool-supported process of modeling security requirements at a high level of abstraction and using other available information sources about the system (produced by other stakeholders) to automate the technical implementation. MDS takes the privacy model, the generated system description, and other information as inputs into its "model transformations", and automatically generates configurations for the various other components of the solution, as well as fine-grained access rules (which are information-flow-based and attribute-based). This simplifies and automates security policy authoring and management, and automatically generates/updates fine-grained technical policy rules for the full technology stack.
2) Policy enforcement: PbD needs a tool that enforces the generated technical privacy rules and configurations across the IT landscape (e.g. using ABAC), across the information lifecycle and the software development lifecycle. It also needs to alert the right people when something happens that needs attention, or take action automatically. MDS solves several of ABAC's main management and implementation challenges.
3) Policy compliance: PbD needs a user-centric tool that lets users verify (audit) that their policies are enforced correctly. This feature analyzes the traceable correspondence between the technical security policy implementation (e.g. ABAC) and the information assurance requirements captured in "undistorted" requirements models (e.g. Common Criteria, control objectives), and documents "supporting evidence" for accreditation/compliance purposes. It helps audit "as-is" processes and controls against the defined security policies for privacy, using "model-driven security" accreditation automation to automatically correlate, analyze, and document that correspondence (see the sketch after this list).
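
To illustrate the compliance/traceability idea from point 3, here is a minimal, hypothetical sketch: each generated rule records which requirement it implements, so an audit can automatically report coverage (and gaps) as supporting evidence. All names and data structures are invented for illustration:

```python
# Hedged sketch of the "policy compliance" step: correlating generated
# technical rules back to the requirement objectives they implement, so an
# auditor sees traceable supporting evidence. Illustrative structures only.

requirements = {
    "PbD-1": "personal data used only for the stated purpose",
    "PbD-2": "personal data deleted when the purpose is completed",
}

# Each generated rule carries the id of the requirement it implements,
# recorded by the model transformation that produced it.
generated_rules = [
    {"id": "r1", "requirement": "PbD-1",
     "rule": "permit read(record) if purpose(request) == purpose(record)"},
    {"id": "r2", "requirement": "PbD-2",
     "rule": "schedule delete(record) when purpose(record) is completed"},
]

def audit(reqs, rules):
    """Report, per requirement, the generated rules that trace back to it."""
    evidence = {rid: [] for rid in reqs}
    for rule in rules:
        evidence[rule["requirement"]].append(rule["id"])
    for rid, rule_ids in evidence.items():
        status = "covered by " + ", ".join(rule_ids) if rule_ids else "NO RULES - gap!"
        print(f"{rid} ({reqs[rid]}): {status}")

audit(requirements, generated_rules)
```
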
Please contact us if you have any further questions about MDS or PbD.

Monday, 21 July 2014

Externalized Authorization Management (EAM) in the "trough of disillusionment" - Model-driven security is a way out!

Gartner's "Hype Cycle for Governance, Risk and Compliance Technologies, 2014"* puts "Externalized Authorization Management" (EAM) in the "Trough of Disillusionment", stating that "Gartner clients have reported that maintaining a fine-grained access policy across a large number of custom or complex applications can be a daunting and expensive proposition". In line with this, we have discussed many times in most of our own publications for over a decade) is that authoring and maintaining a large number of fine-grained EAM access policies across many applications is expensive, difficult, time-consuming, error-prone, and non-traceable. This is especially true for attribute-based access control (ABAC), which supports extremely rich and flexible access rules.

However, it is important to note that Gartner's "Hype Cycle for Governance, Risk and Compliance, 2014" category is "Externalized Authorization Management", not model-driven security (MDS) used with EAM. For over a decade, we have shown that MDS is an ideal candidate to resolve EAM's manageability/complexity challenges. Over the years, we have carried out many MDS deployments (on top of EAM & ABAC), implementing highly complex policies. MDS works best if there is a general, intuitive, simple human understanding of the policy. Like EAM & ABAC, MDS requires access to a number of attribute sources (and often also attribute mappings) to turn this general human understanding of the policy into the detailed technical rules that can be technically enforced (e.g. by EAM/ABAC). This can be a challenge; however, the resulting level of automation, manageability, usability, support for system agility, and traceable compliance by far outweighs the costs.

An example we have recently designed and implemented for a customer is something we tentatively call "relationship-based access control" (RelBAC; we also call it a highly enhanced, multidimensional "proximity-based access control" model): humans are usually great at expressing general access policies based on the relationship between themselves (or something they are associated with) and something they are trying to access (or something the accessed resource is associated with). Due to restrictions we cannot post too many details here, but please contact us if you would like to hear more about this.
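
Without revealing the customer design, here is a generic, speculative sketch of the proximity idea: access is granted when a short chain of trusted relationships connects the requester to the resource. All names and the distance threshold are illustrative:

```python
# Speculative sketch of a relationship/proximity-based access check: access
# is granted when the requester is connected to the resource through a short
# chain of relationships. General idea only, not the customer design above.

from collections import deque

# Relationship graph: who/what is associated with whom/what.
edges = {
    "alice":     ["team-blue"],
    "team-blue": ["alice", "project-x"],
    "project-x": ["team-blue", "report-42"],
    "report-42": ["project-x"],
    "mallory":   [],
}

def proximity(graph, subject, resource):
    """Length of the shortest relationship chain, or None if unrelated."""
    seen, queue = {subject}, deque([(subject, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == resource:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def allow(subject, resource, max_distance=3):
    d = proximity(edges, subject, resource)
    return d is not None and d <= max_distance

print(allow("alice", "report-42"))    # True: alice -> team-blue -> project-x -> report-42
print(allow("mallory", "report-42"))  # False: no relationship chain
```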

* Gartner, Hype Cycle for Governance, Risk and Compliance Technologies, 17 July 2014, John A Wheeler

Friday, 11 April 2014

Attribute-Based Access Control (ABAC) adoption 70% by 2020 - Model-Driven Security helps make ABAC manageable!

Gartner (Gregg Kreizmann, at the Identity and Access Summit in Nov 2013) forecasts that by 2020, 70% of all businesses will use attribute-based access control (ABAC) as the dominant mechanism to protect critical assets, up from less than 5% today. ABAC is about saying which good stuff should be allowed (whitelisting). This is contrary to most of what the security industry does today, which is saying which bad stuff should not be allowed (blacklisting). Blacklisting suffers from many issues, especially around accuracy (false positives and false negatives).

However, while whitelisting a la ABAC does not have those problems, it frequently suffers from complexity overload related to the authoring and maintenance/update of the many complex, changing fine-grained access rules. As a result, ABAC hasn't taken off as much as it should have by now. Another issue is the complexity of the ABAC infrastructure: potentially many attributes have to be aligned, attribute sources (PIPs) have to be plugged into the ABAC system, and the enforcement end (PEP) also needs to be plugged into the information flow. All in all, this is not a minor undertaking, but it is worth the effort in the long run (in the same way that IdM and PKI only materialized ROI after a while).
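
To make the moving parts tangible, here is a toy sketch of the ABAC pipeline just described: a PEP asks a PDP for a decision, with pluggable attribute sources (PIPs) enriching the request before whitelist rules are evaluated. Names and structures are invented, not any product's API:

```python
# Minimal ABAC sketch (illustrative names only): a PEP intercepts a request,
# a PDP evaluates whitelist rules over attributes that pluggable PIPs
# resolve on demand.

# Attribute sources (PIPs): each resolves one category of attributes.
def hr_pip(request):
    directory = {"alice": {"department": "billing"}}
    return directory.get(request["subject"], {})

def time_pip(request):
    return {"business_hours": 9 <= request["hour"] < 17}

PIPS = [hr_pip, time_pip]

# Whitelist rules, ideally generated (e.g. by MDS) rather than hand-written.
RULES = [
    lambda a: a.get("department") == "billing"
              and a.get("business_hours")
              and a.get("action") == "read",
]

def pdp_decide(request):
    attributes = dict(request)           # request attributes...
    for pip in PIPS:                     # ...enriched by each PIP
        attributes.update(pip(request))
    return any(rule(attributes) for rule in RULES)  # deny unless whitelisted

def pep(request):
    return "PERMIT" if pdp_decide(request) else "DENY"

print(pep({"subject": "alice", "action": "read",  "hour": 10}))  # PERMIT
print(pep({"subject": "alice", "action": "write", "hour": 10}))  # DENY
```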

Model-driven security (MDS) policy automation is a critically important part of the ABAC story: it allows the specification of human-intuitive, generic, undistorted policy models, and automatically turns those into the fine-grained, technical ABAC rules. Part of the "secret sauce" is that MDS feeds other information sources into this process to figure out which rules to generate. MDS also allows automated checking for compliance/accreditation. See the Wikipedia article and our website for details.

In summary, MDS fixes some of the complexity issues that become evident when ABAC gets deployed. Oh, and by the way, we are not the only ones saying that: industry analyst firm Gartner identified model-driven security as part of its "Top 10 Strategic Technologies for 2012", and selected OpenPMF, a model-driven security product, as a "Cool Vendor 2008" product.

Wednesday, 30 January 2013

Model-Driven Security possible without Model-Driven Software Engineering? Of course!


Today I want to clarify that model-driven security (MDS) does not necessarily rely on model-driven development to work - even though it relies on application, system, and interaction models (so-called "functional models") to achieve significant security policy automation. In the traditional MDS approach, these functional models ideally come from application models manually authored during model-driven development (e.g. UML, BPMN). But this is not necessary: we have designed an additional solution for our OpenPMF where the functional models are in fact obtained from an IT asset management tool that is part of our partner Promia, Inc.'s intrusion detection/prevention product Raven. This works well, and it enables the use of model-driven security in environments which do not support model-driven development, or where model-driven development is not desired.
While this may not sound like a big deal, it is in fact a big deal: it dramatically increases the applicability of model-driven security and makes adoption a lot easier.

Tuesday, 29 January 2013

"Policy as a Service"


The financial ROI of Cloud security and compliance is judged by decision makers in end-user organizations by the same measures as Cloud computing in general, i.e. by how much it cuts up-front capital expenditure and in-house manual maintenance cost. However, manually translating security policy into a technical implementation is difficult, expensive, and error-prone (especially for the application layer). In order to reduce security-related manual maintenance cost at the end-user organization, security tools need to become more automated. With the emergence of Cloud PaaS, it is therefore logical to move all or parts of the model-driven security architecture into the Cloud to protect and audit Cloud applications and mashups with maximal automation. In particular, policies are provided as a Cloud service to application development and deployment tools (i.e. "Policy as a Service"), and policy automation is embedded into Cloud application deployment and runtime platforms (i.e. automated policy generation/update, enforcement, and monitoring). Different Cloud deployment scenarios are possible, which differ from local non-Cloud deployments where model-driven security is conventionally installed within or alongside a locally installed development tool (e.g. Eclipse). Policy as a Service (see ObjectSecurity OpenPMF) involves five parts (a small sketch of the overall loop follows the list):
  1. Policy Configuration from the Cloud: Policy configurations are provided as a subscription-based Cloud service to application development tools. Offering specification, maintenance, and update of policy models as a Cloud service to application developers and security experts has significant benefits: most importantly, instead of having to specify (or buy and install) and maintain the policy models used for model-driven security on an on-going basis, application developers and security specialists can now simply subscribe to the kinds of policy feeds they require, without needing to know the details of the models. The Policy as a Service provider (typically different from the Cloud provider) takes care of policy modeling, maintenance, and update. Other benefits are that the user organization does not need to be a security and compliance expert (because up-to-date policy models are provided to them as a feed on an on-going basis), that the up-front cost hurdle is minimized thanks to the subscription model, and that there is no need for the end-user organization to continually monitor regulations and best practices for changes.
  2. Automatic Technical Policy Generation in the Cloud: The automatic policy generation feature of MDS is integrated into the development, deployment, and mashup tools (to get access to functional application information). It consumes the policy feed described in the previous part. Platform as a Service (PaaS) sometimes includes both Cloud-hosted development and mashup tools and a Cloud-hosted runtime application platform. In this case, automatic technical policy generation using model-driven security (MDS) can also be moved into the Cloud, so that technical security policies can be automatically generated for the applications during the Cloud-hosted development, deployment, and/or mashup process. This is in particular the case for mashup tools, because those tools are more likely to be Cloud-hosted, are often graphical and/or model-driven, and are concerned with interactions and information flows between Cloud services. If the development tools are not hosted on the PaaS Cloud, then the MDS technical policy auto-generation feature needs to be integrated into the local development tools.
  3. Automatic Security Policy Enforcement in the Cloud: Policy enforcement should naturally be integrated into the PaaS application platform, so that the generated technical policies are automatically enforced whenever Cloud services are accessed. As described in the previous part, policies are either generated within the Cloud using hosted MDS and PaaS development tools, or are uploaded from local MDS and development tools. How policy enforcement points are built into the PaaS application platform depends on whether the platform (1) allows the installation of a policy enforcement point (e.g. various open-source PaaS platforms; see case studies below), (2) supports a standards-based policy enforcement point (e.g. OASIS XACML), or (3) supports a proprietary policy enforcement point.
  4. Automatic Policy Monitoring in the Cloud: Policy enforcement points typically raise security-related runtime alerts, especially about incidents related to invocations that have been blocked. The collection, analysis, and visual representation of those alerts can also be moved into the Cloud. This has numerous benefits: incidents can be centrally analyzed for multiple Cloud services together with other information (e.g. network intrusion detection); an integrated visual representation of the security posture across multiple Cloud services can be provided; integrated incident information can be stored for auditing purposes; and compliance-related decision support tools can be offered as a Cloud service.
  5. Automatic Updating: The described model-driven approach enables automatic updates of technical security policy enforcement and auditing whenever applications, and especially their interactions, change. The same automation is possible when security policy requirements change.
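
As a rough illustration of parts 1, 2, and 5 together, here is a hypothetical sketch of the subscription loop: a platform pulls the policy-model feed, and any new feed version triggers regeneration of the technical rules for the currently deployed applications. The feed format and function names are invented:

```python
# Hedged sketch of the "Policy as a Service" loop: a deployment platform
# subscribes to a policy-model feed, and every feed update (or application
# change) triggers regeneration of the technical rules.

import json

def fetch_policy_feed():
    """Stand-in for pulling the subscribed policy model from the Cloud service."""
    return json.loads('{"version": 7, "policy": "same_tenant_only"}')

def discover_applications():
    """Stand-in for the PaaS platform's view of deployed apps and mashups."""
    return [
        {"name": "crm",     "tenant": "acme"},
        {"name": "billing", "tenant": "acme"},
        {"name": "partner", "tenant": "other"},
    ]

def generate_rules(feed, apps):
    """MDS-style regeneration: whitelist only interactions the model allows."""
    rules = []
    if feed["policy"] == "same_tenant_only":
        for a in apps:
            for b in apps:
                if a is not b and a["tenant"] == b["tenant"]:
                    rules.append((a["name"], b["name"], "allow"))
    return rules

last_version = None                          # version seen at the last run
feed = fetch_policy_feed()
if feed["version"] != last_version:          # new policy model arrived
    rules = generate_rules(feed, discover_applications())
    last_version = feed["version"]
    print(f"policy v{feed['version']}:", rules)  # push to PEPs / monitoring
```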

Publications about this can be found in the ISSA Journal (October 2010) and on IBM developerWorks. Contact me if you would like more information about Policy as a Service.

AuthoriZation Based Access Control (ZBAC) and model-driven security

Today I would like to share my views about authoriZation Based Access Control (ZBAC) and how it relates to model-driven security policy automation. There are numerous incarnations of the basic idea: an authorization server issues cryptographically signed tokens to other parties upon request, which are used as an access attribute source when access decisions are made. A standard that can be used to implement authorization tokens is OAuth - however, unfortunately it is often less-than-optimally used to implement what really is a single sign-on (SSO) authentication assertion token (SAML was originally designed to do that!). The difference is this: SSO is when I authenticate to a central service, which then confirms to whatever service I want to access that I am who I am. ZBAC is more like asking a central service for some access permissions, which can then be used to access a service. A ZBAC token is more like a car key, which gives an authorization to unlock a car to whoever holds the key. While the token should typically be signed and bound to an identity, the holder of the token should be able to issue that authorization (or subsets of it) to other parties (i.e. the car does not care if I lend you my car key - it will still unlock). This is clearly critical for authorization delegation and other features necessary in today's interconnected IT application chains. Alan Karp et al. have written some great reports about the concept and its uses (see "From ABAC to ZBAC" and "Authorization-Based Access Control for the Services Oriented Architecture").
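
Here is a minimal, illustrative sketch of the car-key idea (not OAuth, SAML, or any specific product): an authorization server mints a signed token carrying permissions, and the resource service honors the token for whoever presents it:

```python
# Illustrative ZBAC-style token flow: an authorization server mints a signed
# token carrying permissions, and the resource service accepts any holder of
# a valid token, much like a car key. Invented formats, sketch only.

import hmac, hashlib, json

SERVER_KEY = b"authz-server-secret"   # shared with the resource service

def mint_token(permissions):
    """Authorization server: sign the granted permissions."""
    body = json.dumps({"perms": sorted(permissions)}).encode()
    sig = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def check(token, needed):
    """Resource service: verify the signature, then check the permission.
    Note: whoever holds the token can use it (or lend it out), like a key."""
    expected = hmac.new(SERVER_KEY, token["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return needed in json.loads(token["body"])["perms"]

token = mint_token({"unlock_car", "start_engine"})
print(check(token, "unlock_car"))   # True - for any holder of the token
print(check(token, "open_trunk"))   # False - permission was never granted
```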

How does this relate to model-driven security? In fact, the main problem model-driven security solves still remains: how can you author the policies that go into the authorization tokens, and how can you maintain a correct policy without a maintenance nightmare? Model-driven security, as implemented for example by ObjectSecurity OpenPMF, can do this for authorization-based access control (ZBAC), attribute-based access control (ABAC, e.g. XACML), and role/identity-based access control (RBAC/IBAC) in a unified fashion: you author policies in models, and model-driven security policy automation then generates the policy rules that drive (1) for ZBAC, the authorization server's decisioning as to which permissions to add to whose authorization tokens, and (2) for ABAC, the policy decision point (PDP) that protects the accessed resource via a policy enforcement point (PEP) based on the policy. In summary, model-driven security is a great mechanism to bring various policy models, such as ZBAC and ABAC, under one unified umbrella. This keeps the maintenance effort and error potential low.

Monday, 5 November 2012

On Resource-Based Access Control (ResBAC) vs. Identity-Based Access Control (IBAC)


Most technical access control policies today are specified in an "identity-centric" way, a term I use for policies that are specified with the accessor (= the calling identity) in mind. This is the "who?" in the question that the policy answers: "who is accessing?", "who can do what?", etc. One of the reasons for this is that the information security industry has advocated identity-based access control (IBAC) for two decades or longer by now, involving identity management (IdM), identity & access management (IAM), federated identity management, single sign-on, role-based access control (RBAC), etc. After such a long time of advocacy and education, the "collective mindset" of the information security industry has been shaped by the solution rather than by the problem it is trying to solve. Now that we have deployed the "hammer" (IAM/RBAC), everything looks like a "nail" (identity-centric), and the industry keeps "bolting" access policies onto identity systems. Unfortunately, most information security professionals will agree that identity-centric access control is not as easy to deploy and manage at scale as it seems...

It turns out that it is often easier to start stating your access policies with the protected resources in mind ("resource-based access control", ResBAC), and then "bolt" the attributes that are relevant for access onto the protected resource. The questions now change to "how is the resource to be protected?", "what is allowed?", "what are the conditions for access to the resource?", etc. This shift in thinking about and structuring the policy can simplify things because (1) the required protection is often clearer and more logical, and thus easier to specify this way, and (2) the access policy usually gets decided and enforced at the protected resource, so it is easier to distribute resource-based access policies to the place of decisioning/enforcement. Note that identities still play a critical role, but in the structure of the policy they are one of potentially many attributes (in the sense of attribute-based access control, ABAC) that determine access to the protected resource.
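
As a small illustration of this shift in policy structure, here is a hypothetical sketch in which each protected resource carries its own access conditions, and identity appears merely as one attribute among several:

```python
# Sketch of resource-centric policy structure: each protected resource
# carries its own access conditions; identity is just one attribute among
# several. Purely illustrative names and data model.

RESOURCES = {
    "patient-record-17": {
        "conditions": [
            # a treating doctor may access the record...
            lambda ctx: ctx["role"] == "doctor" and ctx["treating"] == "bob",
            # ...and the patient may see their own record
            lambda ctx: ctx["subject"] == "bob",
        ],
    },
}

def access(resource_id, ctx):
    """Decide at the resource: any satisfied condition grants access."""
    resource = RESOURCES[resource_id]
    return any(cond(ctx) for cond in resource["conditions"])

print(access("patient-record-17",
             {"subject": "alice", "role": "doctor", "treating": "bob"}))  # True
print(access("patient-record-17",
             {"subject": "carol", "role": "doctor", "treating": "dan"}))  # False
```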

Model-driven security (MDS) is an ideal tool for implementing resource-based access control (ResBAC) efficiently and effectively. This is because policies can be expressed in a resource-based way in policy models, which are then automatically turned into the matching technical access policy rules for a particular system (e.g. a SOA/cloud mash-up), and distributed to the protected resources for run-time enforcement.

Thursday, 22 March 2012

Study estimates 59% accreditation cost saving using automated "Correct by Construction (CxC)" tools (& more for agile SOA/Cloud)

I recently found an interesting technical article by the U.S. National Security Agency's (NSA) Tim Kremann in "The Next Wave: The National Security Agency's Review of Emerging Technologies" (Vol. 19, No. 1, 2011, on "High Confidence Software and Systems"). The article, titled "Correct by Construction: Advanced Software Engineering" (p. 22ff), argues that a correct-by-construction (CxC) methodology, such as model-driven software engineering, can improve assurance and reduce the time and cost to certify assurance (incl. security and correctness). In particular, the article quotes a study by the Kestrel Institute, which showed that the automatic generation of certification documents (incl. Common Criteria supporting evidence) as part of a CxC methodology can be estimated to result in "an average overall cost reduction of about 59 percent per certification application due to using CxC methods". Overall, this led the Kestrel researchers to "conclude that a CxC process will produce a certified product for roughly 30-40 percent of the cost of a conventional process".

These findings are in line with our numerous real-world experiences of using model-driven security approaches to automate both technical security policy implementation (MDS) and accreditation evidence generation (MDSA) (alongside model-driven service orchestration or model-driven development). However, our own empirical research has shown that for agile IT landscapes - especially Service Oriented Architecture (SOA) "systems of systems" and the closely related Cloud PaaS mash-up "systems of systems" - the cost reduction compared to manual approaches is much higher than 59%. The following argument makes the accreditation/re-accreditation cost reduction evident:
(1) We can save the 59 percent during first-time accreditation that the Kestrel researchers mentioned above identified, using an automated model-driven approach (MDS/MDSA);
(2) The manual re-accreditation cost after agile SOA orchestration/Cloud mash-up changes can be significant, because changes across the entire "system of systems" need to be analyzed in order to figure out what impacts re-accreditation and how;
(3) Doing this analysis and change evidence generation automatically, based on all the application/interaction/system models, security models, and accreditation requirements models, reduces that cost dramatically: it can be close to zero if the automated analysis finds that the changes do not impact the current accreditation; if manual re-accreditation is necessary, the cost is still dramatically reduced because the supporting evidence and a summary of changes and their impacts are produced automatically (see the sketch below).
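
For illustration, here is a toy sketch (with invented models and control mappings) of step (3): diff two versions of the system model, map the changed elements to accreditation controls, and emit either a "no impact" result or a change summary as supporting evidence:

```python
# Hedged sketch of automated change-impact analysis for re-accreditation:
# diff two versions of the system model, map changed elements to the
# accreditation controls that cover them, and emit a change summary as
# supporting evidence. All mappings are illustrative.

old_model = {"flows": {("portal", "orders"), ("orders", "billing")}}
new_model = {"flows": {("portal", "orders"), ("orders", "billing"),
                       ("orders", "analytics")}}

# Which accreditation controls care about which systems.
control_map = {
    "billing":   ["AC-4 info flow"],
    "analytics": ["AC-4 info flow", "AU-2 audit"],
}

def impact(old, new):
    added = new["flows"] - old["flows"]
    removed = old["flows"] - new["flows"]
    touched = {sys for flow in added | removed for sys in flow}
    controls = sorted({c for s in touched for c in control_map.get(s, [])})
    return added, removed, controls

added, removed, controls = impact(old_model, new_model)
if not controls:
    print("no accredited control affected - no re-accreditation needed")
else:
    print("re-accreditation evidence:")
    print("  added flows:  ", sorted(added))
    print("  removed flows:", sorted(removed))
    print("  affected controls:", controls)
```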

Please contact us if you have any questions about model-driven security or about this blog post.

Friday, 20 January 2012

Analysis Series: "Security Recommendations for Cloud Computing Providers" (German Federal Office for Information Security)

In this post I would like to share my views on the "Authorisation" section (p. 37 in the English version) of the white paper "Security Recommendations for Cloud Computing Providers (Minimum information security requirements)" (download, German & English) by the German Bundesamt für Sicherheit in der Informationstechnik (BSI) (Federal Office for Information Security). The section states:

"Authorisation
The rights management system must ensure that each role may only see the data (including meta-data) required to achieve the task. The access control should be role-based and the roles and authorisations set up should be reviewed regularly. In general, the least privilege model should be used, with users and CSP administrators only possessing the rights that they require to achieve their tasks. Particular attention should be directed here towards privileged users. If the role is that of a CSP administrator, it should be possible to demonstrate that the only data viewed was that which was required for the task. The rights management system should also be capable of fully documenting and monitoring data exports and imports from and to the CSP. Lastly, any particularly critical administration activities, such as installing patches, should only be performed on the four-eye principle."

As with most other guidance documents previously analyzed in this blog post series, this section makes strong requirements statements at a high level ("only see the data...required to achieve the task", "least privilege"), but at the same time recommends only less-than-optimal technical controls ("access control should be role-based"). As previously identified for other guidance documents, the recommendations fail to address the fact that access control needs to be highly contextual to achieve effective "least privilege" for a particular task. Granting role-based access to a particular job function based on everything that job function might ever potentially need to access, for all tasks that job function might ever do, is not effective "least privilege"!

In order to minimize access rights to exactly what needs to be accessed ("least privilege") in a particular situation ("task"), the context of the task, as well as other environmental context (such as time of day, crisis level, or whether the particular patient whose health record the treating doctor wants to access is currently checked into the hospital, etc.), must be taken into account.

The fact that this (and other) government-issued guidance does not address the issue that traditional access controls (incl. role-based access control) are only partly effective at achieving "least privilege" is unfortunate. It allows enterprises to continue to get away with not really solving the real underlying security challenges they are facing, with customers having to pay the price for the damage caused by these only partly effective security measures.

However, real solutions are available today: model-driven security policy automation (e.g. OpenPMF) together with fine-grained, contextual authorization management (e.g. XACML) helps implement real contextual, fine-grained access controls in a manageable way. Model-driven security alleviates the main challenge of authorization management, which is that policies for fine-grained, contextual authorization management are hard to manage and maintain, even for dynamically changing (agile) IT landscapes such as Service Oriented Architectures (SOAs) and Cloud mash-ups. Please feel free to read this blog and our website, or contact me.

Friday, 9 December 2011

Gartner identifies model-driven security as part of "Top 10 Strategic Technologies for 2012"

Industry analyst firm Gartner identifies model-driven security as part of "Top 10 Strategic Technologies for 2012", in the context of "Contextual and Social User Experience". They write "A contextually aware system anticipates the user’s needs and proactively serves up the most appropriate and customized content, product or service. Context can be used to link mobile, social, location, payment and commerce. It can help build skills in augmented reality, model-driven security and ensemble applications." (source: Gartner Identifies the Top 10 Strategic Technologies for 2012). It is great to see that industry analysts continue to acknowledge the significant potential of model-driven security to automate the technical implementation of contextual, rich, and expressive security policies. Read more about model-driven security on this blog, or contact us for more information.

Monday, 1 August 2011

Analysis Series: NISTIR 7628 Smart Grid Security Recommendations


In this “analysis series” blog post, I will focus on US NIST’s 537-page "Guidelines for Smart Grid Cyber Security" (NIST IR 7628). Here are some interesting recommended controls I have analyzed:
  1. Least privilege access control: The recommended control “Least Privilege” (NIST IR 7628 - SG.AC-7) requires that “the organization assigns the most restrictive set of rights and privileges or access needed by users for the performance of specified tasks”, and that “the organization configures the smart grid information system to enforce the most restrictive set of rights and privileges or access needed by users”. In other words, a caller should only be granted access to a resource if that caller has a need to do so in the specific context, for example a particular step in a business process, or a particular system situation such as emergency level.
  2. Information flow enforcement: The recommended control “Information Flow Enforcement” (NIST IR 7628 - SG.AC-5) requires that the smart grid information system enforce assigned authorizations for controlling the flow of information within the smart grid information system and between interconnected smart grid information systems in accordance with applicable policy. Information flow control regulates where information is allowed to travel within a smart grid information system and between smart grid information systems. As example implementations, the document mentions boundary protection devices that restrict smart grid information system services or provide a packet-filtering capability. This section of the document also offers a number of supplemental considerations. Particularly interesting for the discussion in this post, the guidance recommends “dynamic information flow control allowing or disallowing information flows based on changing conditions or operational considerations”.
  3. Incident monitoring, incident reporting, and auditing: Related to achieving visibility, numerous recommendations for incident monitoring, incident reporting, and auditing are spread throughout the NIST IR 7628 document. For example: “Smart Grid Information System Monitoring Tools and Techniques” (SG.SI-4) requires that “the organization monitors events … to detect attacks, unauthorized activities or conditions, and non-malicious errors” based on the organization’s “monitoring objectives and the capability of the smart grid information system to support such activities”. The supplemental guidance states that this can be achieved through a variety of tools and techniques (e.g. intrusion detection systems, intrusion prevention systems, malicious code protection software, log monitoring software, network monitoring software, and network forensic analysis tools), and can include real-time alerting. “Incident Monitoring” (SG.IR-6) requires that “the organization tracks and documents … security incidents”, potentially using “automated mechanisms to assist in the tracking of security incidents and in the collection and analysis of incident information”. “Incident Reporting” (SG.IR-7) requires incident reporting procedures covering what constitutes an incident, the granularity of incident information, who receives it, etc., again potentially employing “automated mechanisms to assist in the reporting of security incidents”. “Auditable Events” (SG.AU-2) requires, in order to identify significant and relevant events that need to be auditable, the development and review of a list of auditable events at an organization-defined frequency, including execution of privileged functions. “Audit Monitoring, Analysis, and Reporting” (SG.AU-6) requires audit record reviews and analyses to find and report inappropriate or unusual activity, potentially employing automated, centralized analysis tools. “Audit Reduction and Report Generation” (SG.AU-7) supports near real-time analysis and after-the-fact investigations of security incidents, e.g. by automatically processing audit records for events of interest based on selectable event criteria. “Audit Generation” (SG.AU-15) recommends an audit record generation capability, potentially aggregating records from multiple components into a time-correlated, system-wide audit trail.
All this makes sense, but it is easier to write about than to actually implement, especially at the scale of a smart grid. Let’s discuss each in turn to see how model-driven security policy automation can help implement these recommendations effectively:
  1. Least privilege access control: What this specifically means is that a dynamic access control “whitelist” (i.e. one stating what is allowed, vs. “blacklists” that state what is not allowed) needs to be available that enforces that policy requirement. Static access control models such as identity-based access control (IBAC) or role-based access control (RBAC) are not sufficient access mechanisms because they do not capture such context in the policy. As a result, virtually all IBAC/RBAC implementations, including traditional Identity and Access Management (IAM) technologies, are insufficient on their own. Attribute-based access control (ABAC), as for example standardized in XACML, helps add this missing context and other additional expressions to the policy. The flipside of ABAC is that those fine-grained, contextual authorization policies are extremely difficult, time-consuming, and error-prone for human administrators to manually author and maintain. Model-driven security policy automation as implemented in OpenPMF can solve this unmanageability problem.
  2. Information flow enforcement: As already mentioned above, IBAC and RBAC are insufficient on their own, and due to the inherently changing (“agile”) nature of today’s interconnected IT landscapes (“systems of systems”), ABAC policies would need to be constantly manually updated to remain correct after “system of systems” changes, resulting in a policy management nightmare. There are a number of other problems with ABAC, e.g. challenges around authorization delegation across service chains and impersonation, which can be solved using authorization-based access control (ZBAC), which uses authorization tokens and federated authorization token servers. Model-driven security policy automation as implemented in OpenPMF can solve the unmanageability problem of both ABAC and ZBAC.
  3. Incident monitoring, incident reporting, and auditing: In the context of the fine-grained contextual authorization mentioned earlier, incident monitoring, reporting, and auditing are intrinsically intertwined with authorization. Monitoring, reporting, and audit tools need to know the specific authorization policies in order to decide whether behaviour is in fact suspicious or not. This differs dramatically from traditional monitoring approaches, which mainly monitor for generic vulnerabilities (i.e. vulnerabilities that occur for a particular technology rather than for a particular business) and thus do not need to know any specifics about the organization’s business processes in order to flag an incident. I call control and visibility for generic vulnerabilities “security hygiene”, to distinguish them from organization-specific policy enforcement and monitoring. Model-driven security incident monitoring and analysis, as implemented in OpenPMF, can solve the policy-driven monitoring challenge for authorization management compliance (see the sketch after this list).
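
To illustrate point 3, here is a deliberately small sketch of policy-driven monitoring: because the monitor knows the generated authorization whitelist, it can flag off-policy behaviour that is specific to this organization, rather than only generic "security hygiene" issues. Events and rules are invented:

```python
# Illustrative sketch of policy-driven monitoring: the monitor knows the
# generated authorization whitelist, so it can flag behaviour that is
# suspicious for *this* organization, not just generically malformed traffic.

WHITELIST = {
    ("operator-1", "substation-9", "read_status"),
    ("scada-1",    "substation-9", "set_breaker"),
}

events = [
    {"src": "operator-1", "dst": "substation-9", "op": "read_status"},
    {"src": "operator-1", "dst": "substation-9", "op": "set_breaker"},
]

for e in events:
    key = (e["src"], e["dst"], e["op"])
    if key not in WHITELIST:
        print("INCIDENT: off-policy event", e)   # report / alert / audit trail
```
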
I hope you enjoyed this analysis; comments are, of course, always appreciated.

Friday, 29 July 2011

Analysis Series: HIPAA Security Rule & Privacy Rule and “minimum necessary” access

Today I would like to discuss what the “minimum necessary” access control in the Health Insurance Portability and Accountability Act (HIPAA) of 1996 really means and how such least privilege technical access control can be effectively implemented. The US government's HIPAA website explains:
  • The “HIPAA Privacy Rule” establishes regulations for the use and disclosure of Protected Health Information (PHI); in particular, it requests the implementation of least privilege: “A central aspect of the Privacy Rule is the principle of ‘minimum necessary’ use and disclosure. A covered entity must make reasonable efforts to use, disclose, and request only the minimum amount of protected health information needed to accomplish the intended purpose of the use, disclosure, or request. A covered entity must develop and implement policies and procedures to reasonably limit uses and disclosures to the minimum necessary”, i.e. a covered entity may not use, disclose, or request the entire medical record for a particular purpose unless it can specifically justify the whole record as the amount reasonably needed for the purpose.
  • The “HIPAA Security Rule” also limits uses and disclosures of PHI to the “minimum necessary”: the Security Rule’s administrative safeguards section requires a covered entity to implement and periodically assess policies and procedures for authorizing access to e-PHI only when such access is appropriate. Interestingly, this administrative (i.e. non-technical) section specifically states that this should be implemented “based on the user or recipient's role (role-based access)”. The technical safeguards section mandates access control: a covered entity must “implement technical policies and procedures that allow only authorized persons to access electronic protected health information (e-PHI)”, and must “record and examine access and other activity in information systems that contain or use e-PHI.”
To technically implement least-privilege access based on the “minimum necessary” for the particular “use, disclosure, or request”, technical access control must be fine-grained and contextual (e.g. based on the context of the access, the business process the requester or the patient is in, the way information is aggregated across interconnected IT systems, etc.). Role-based access control (which is mentioned in the administrative section, not the technical section!) is an insufficient technical mechanism because it is not contextual enough to grant access only when needed for the particular use.
Instead, fine-grained, contextual authorization management (AM) is needed to enforce such policies. The challenge with AM is that policies are hard to author and maintain - there are simply too many technical rules, and maintaining those is too time-consuming, expensive, difficult, and error-prone. Also, these technical rules will often not directly match how humans think about business security policies.
To solve that policy maintenance show-stopper, model-driven security (MDS) policy automation is also needed, which automatically generates technical security rules from generic security policy requirements (models) that capture, for example, HIPAA security & privacy requirements. MDS takes these models, analyzes information sources such as business processes, applications and interactions, user information and other sources, and automatically generates the technical policy rules enforced by the AM. Most importantly, MDS can automatically update the rules when users, business processes, and applications change.
Model-driven security (MDS) policy automation with fine-grained authorization management (AM) is a critical, unique combination to make this happen. The award-winning ObjectSecurity OpenPMF is the only MDS + AM product on the market. It is adopted by organizations with the most stringent security requirements, including the US Navy. We are currently completing a study and a scientific publication in which a number of regulations have been analyzed in a similar fashion. Please contact us if you would like further information or if you have any questions/comments.
In conclusion: better to adopt effective technical mechanisms that implement the requirements effectively. Just because current "best" practices for HIPAA do not implement “minimum necessary” effectively does not mean that your organization will get away with it when things go wrong!

Friday, 15 July 2011

Analysis Series: PCI DSS - what it says & what it means

I am delighted to announce a new "Analysis Series" on this blog: Over the next couple of months I will publish numerous insights from a recent gap analysis of security standards and guidance documents. The gap analysis is currently being carried out as part of ObjectSecurity's cloud security gap analysis project.

Today I would like to share my view of what the Payment Card Industry (PCI) Data Security Standard (DSS) version 2.0 has to say about access control and technical policy implementation. It says that "restricting access is crucial!", and the main point is covered here:

Requirement 7: Restrict access to cardholder data by business need to know
To ensure critical data can only be accessed by authorized personnel, systems and processes must be in place to limit access based on need to know and according to job responsibilities. Need to know is when access rights are granted to only the least amount of data and privileges needed to perform a job.
7.1 Limit access to system components and cardholder data to only those individuals whose job requires such access.
7.2 Establish an access control system for systems components with multiple users that restricts access based on a user’s need to know, and is set to “deny all” unless specifically allowed.


This clearly states that access policies need to be contextual by the job (not the job role!) - so, for example, if someone ("Alice") needs access to some customer's ("Bob") payment information for the job of charging Bob, the technical access policy implementation needs to make sure that Alice is actually involved in a sales transaction related to Bob, and that Alice is at the "charge the customer" step of the sales business process. This is called "context". It is important to understand that Alice should not have blanket access to all customers' payment data just because she might potentially have a transaction with any customer when they call and buy something. In that case, "need to know" would not be fully implemented.
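
Here is a toy sketch of exactly this example (with an invented data model): access to a customer's payment data is only permitted inside an active sales transaction with that customer, at the "charge the customer" step:

```python
# Toy version of the Alice/Bob example: "need to know" is bound to the
# business-process context, not to a role. Data model is invented.

transactions = [
    {"clerk": "alice", "customer": "bob", "step": "charge_customer"},
]

def may_read_payment_data(clerk, customer):
    """Permit only within an active transaction, at the charging step."""
    return any(t["clerk"] == clerk and t["customer"] == customer
               and t["step"] == "charge_customer" for t in transactions)

print(may_read_payment_data("alice", "bob"))    # True: in-context access
print(may_read_payment_data("alice", "carol"))  # False: no blanket access
```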

This example makes clear that role-based access control (RBAC) and user account management are not sufficient technical mechanisms to implement PCI-DSS. Instead, fine-grained, contextual authorization management (AM) is needed to enforce such complex policies. The challenge with AM is that policies are hard to author and maintain - there are simply too many technical rules, and maintaining those is too time-consuming, expensive, difficult, and error-prone. Also, these technical rules will often not directly match how humans think about business security policies.

To solve that policy maintenance show-stopper, model-driven security (MDS) policy automation is also needed, which automatically generates technical security rules from generic security policy requirements (models) - for example captured in models close to the understanding of PCI-DSS Requirement 7. MDS takes these models, analyzes information sources such as business processes, applications and interactions, user information and other sources, and automatically generates the technical policy rules enforced by the AM. Most importantly, MDS can automatically update the rules when users, business processes, and applications change.

In conclusion: start solving the real challenges instead of "something else". Don't wait until CISO comes to mean "Career Is Suddenly Over". Better to adopt effective technical mechanisms that implement the requirements. Just because "best" practices for PCI-DSS do not implement PCI-DSS correctly does not mean that your organization will get away with it when things go wrong.

Model-driven security (MDS) policy automation with fine-grained authorization management (AM) is a critical, unique combination to make this happen. The award-winning ObjectSecurity OpenPMF is the only MDS + AM product on the market. It is adopted by organizations with the most stringent security requirements, including the US Navy. Please contact us if you would like further information or if you have any questions/comments.

Monday, 27 June 2011

XACML is a machine policy format, not a policy authoring language for humans

We have said it since 2003, and finally we are not alone anymore. Increasingly, industry trade bodies and analysts agree with us that XACML is not a viable policy authoring language for humans. For example, OASIS held a recent webinar where Domain-Specific Languages (DSLs) were mentioned in the "Future directions" part of the presentation. Great, because model-driven security policy automation is inherently based on DSLs, and ObjectSecurity's OpenPMF has full standards-based (Eclipse EMF) support for DSLs. Recently, an analyst who covers the authorization management space wrote that XACML is only good if it is hidden from humans.
Because this is great and shows that the industry is moving towards finally accepting policy automation as a necessary mechanism to make authorization management work, I would like to explain in a bit more detail that it is not only about "hiding" XACML, but also about automatically generating technical details from generic DSLs. DSLs should express policies in the way human security policy specialists think about policy, which might differ from how the technical enforcement actually makes concrete decisions. ObjectSecurity's award-winning and patent-pending model-driven security policy automation bridges the gap to the actual technical enforcement rules through transformation algorithms that can analyze many information sources (e.g. business processes, application mashups, directory information, sensor information) to automatically generate and update the technical rules. If you want to read up on this, feel free to read our website as an introduction, and get further details here.
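
To illustrate the difference between merely hiding XACML and actually generating it, here is a hypothetical miniature: one human-level DSL statement expands, via a transformation that consults another information source (a directory), into several concrete technical rules. The DSL grammar and the output format are invented; real generated XACML would be far more verbose:

```python
# Hedged illustration of the DSL-to-technical-rule gap: one human-level DSL
# statement expands, via a transformation that consults other information
# sources (here, a directory), into several concrete machine rules.

directory = {"nurses": ["dana", "erin"]}   # extra information source

dsl_statement = "allow nurses to read vitals during their shift"

def transform(statement, directory):
    """Expand the group via the directory and attach the context condition."""
    words = statement.split()
    group, action, resource = words[1], words[3], words[4]
    return [{"subject": member, "action": action, "resource": resource,
             "condition": "current_time within subject.shift"}
            for member in directory[group]]

for rule in transform(dsl_statement, directory):
    print(rule)   # two technical rules generated from one DSL statement
```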

Tuesday, 31 May 2011

Government clouds (G-Cloud) - Security through Obscurity?

We are currently carrying out an R&D project about applying policy automation and ObjectSecurity OpenPMF to the cloud. Interestingly, government cloud initiatives worldwide seem to keep their information assurance (IA) architectures confidential (maybe even classified?). For example (just to name one), the UK Cabinet Office published a number of G-Cloud documents but deliberately did not publish the Information Assurance document. I have been in the security field for way too long (over 15 years) and have heard and seen evidence over and over again that security through obscurity's disadvantages outweigh its benefits. And I am apparently not the only one (e.g. concerns voiced here) who thinks that about G-Cloud. G-Clouds are large, interconnected IT landscapes that rely on standards and frameworks. How is this ecosystem ever supposed to come together if it is hidden under a cloak of obscurity? And how is the required innovation supposed to come in if the cloak of obscurity prevents innovators from applying their solutions to G-Cloud? I believe that general government cloud architectures should be made public so that the expert community can provide suggestions. It is also a good way to achieve some transparency about procurements and to push for standards. I would be grateful if G-Cloud initiatives could provide me with information about their IA architectures so I can explain why and how model-driven security policy automation and compliance automation should be integrated.

Wednesday, 27 April 2011

Cyber security paradigm shift needed: Focus on solving your customers' problems instead of “something else”!


Over the last decade, a lot of money has been spent on cyber security, while cyber security has become less effective at preventing security breaches and the related damage. On the one hand, this is partly because of the increasing sophistication of attackers. But on the other hand, it is partly also because the cyber security industry fails to adequately address the really significant security problems, and instead sells "something else" that is easier to solve but does not solve those problems. While a defense-in-depth strategy is desirable, our industry needs to stop shying away from solving the big problems (incl. the general lack of enforcement mechanisms and procedures, preventing insider theft, preventing data leakage, the lack of mechanisms to implement regulatory compliance reliably for applications, and the lack of mechanisms to enforce least privilege / need-to-know policies).
One thing I hear repeatedly is that customers do not ask for solutions to their problems, but instead ask for a more or less ineffective "quick fix". I do not believe this is really true - customers are often unaware of how to pose the right questions to their vendors, or pose them very indirectly, because their understanding of security is shaped by vendor offerings/marketing/misinformation. Discuss top-down ("what are you trying to achieve?") instead of bottom-up ("what product feature xy would you like to adopt?"). Here are some concrete questions to ask your customers:
 
1) HOW TO BE PREVENTIVE / PROACTIVE & REACTIVE VS JUST REACTIVE?
- How are enterprise security policies and regulatory compliance in general proactively enforced (=blocking based on whitelists), as opposed to just reactively monitored?
- How are enterprise security policies concretely implemented (enforced & monitored) in the software?
- How do you demonstrate that the implemented technical security actually matches with the intended enterprise security policies?
- How is automation used to achieve all this?

2) HOW TO PREVENT INSIDER BREACHES, LEAST PRIVILEGE?
- How are malicious or negligent insiders (or compromised accounts) prevented from committing massive data breaches?
- How are contextual policies, such as "least privilege" policies enforced, e.g. for HIPAA and PCI?
- How is automation used to achieve all this?

3) HOW TO MAINTAIN SECURITY IN THE FACE OF CHANGE? HOW TO AUTOMATE?
- What happens when the interconnected application landscape changes (e.g. SOA & cloud agility)?
- How is security made part of the software development lifecycle (SDLC) without burdening developers?
- How are the technical policies updated to match with the enterprise security policies and the changed environment in a fast, reliable, and cheap fashion?
- And how is compliance reliably demonstrated after updates?
- How is automation used to achieve all this?

4) CUSTOMER PAIN POINTS:
- Even if customers have not raised those points as described above, they will probably have implicitly asked for solutions to those problems. For example:
- If customers say "the deployment needs to comply with regulation xy", and the regulation states things like "data should only be used for the purpose", then you need to enforce least privilege (example: HIPAA). The same applies if customers ask for solutions to prevent insider breaches.
- If customers ask for breach prevention, they will need real proactive policy enforcement (= blocking based on whitelists), not just monitoring.
- If customers say "our IT landscape needs to be agile", or " future-proof", then they will need to have policy automation. Otherwise the manual policy implementation will effectively prevent IT agility (too many manual updates)

Comments on this are greatly appreciated as usual.