Model Driven Security Policy Automation
On this blog, ObjectSecurity co-founder and CEO Ulrich Lang discusses security policy automation and model-driven security. The aim of this blog is to advocate and advance the state of the art in this area through the exchange of ideas. www.modeldrivensecurity.org - www.policyautomation.org - www.objectsecurity.com
Thursday, 15 December 2011
Model-driven security is now on Wikipedia
We are pleased to report that model-driven security is now covered on the encyclopedia website Wikipedia. Click here to read the Wikipedia article.
Friday, 9 December 2011
Gartner identifies model-driven security as part of "Top 10 Strategic Technologies for 2012"
Industry analyst firm Gartner identifies model-driven security as part of "Top 10 Strategic Technologies for 2012", in the context of "Contextual and Social User Experience". They write "A contextually aware system anticipates the user’s needs and proactively serves up the most appropriate and customized content, product or service. Context can be used to link mobile, social, location, payment and commerce. It can help build skills in augmented reality, model-driven security and ensemble applications." (source: Gartner Identifies the Top 10 Strategic Technologies for 2012). It is great to see that industry analysts continue to acknowledge the significant potential of model-driven security to automate the technical implementation of contextual, rich, and expressive security policies. Read more about model-driven security on this blog, or contact us for more information.
Monday, 1 August 2011
Analysis Series: NISTIR 7628 Smart Grid Security Recommendations
In this "analysis series" blog post, I will focus on US NIST's 537-page "Guidelines for Smart Grid Cyber Security" (NIST IR 7628). Here are some interesting recommended controls I have analyzed:
- Least privilege access control: The recommended control “Least Privilege” (NIST IR 7628 - SG.AC-7) requires that “the organization assigns the most restrictive set of rights and privileges or access needed by users for the performance of specified tasks”, and that “the organization configures the smart grid information system to enforce the most restrictive set of rights and privileges or access needed by users”. In other words, a caller should only be granted access to a resource if that caller has a need to do so in the specific context, for example a particular step in a business process, or a particular system situation such as emergency level.
- Information flow enforcement: The recommended control "Information Flow Enforcement" (NIST IR 7628 - SG.AC-5) requires that the smart grid information system enforces assigned authorizations for controlling the flow of information within the smart grid information system and between interconnected smart grid information systems in accordance with applicable policy. Information flow control regulates where information is allowed to travel within a smart grid information system and between smart grid information systems. As example implementations, the document mentions boundary protection devices that restrict smart grid information system services or provide a packet-filtering capability. This section of the document also offers a number of supplemental considerations. Particularly interesting for this discussion, the guidance recommends "dynamic information flow control allowing or disallowing information flows based on changing conditions or operational considerations".
- Incident monitoring, incident reporting, and auditing: Related to achieving visibility, numerous recommendations for incident monitoring, incident reporting, and auditing are spread throughout the NIST IR 7628 document. For example, "Smart Grid Information System Monitoring Tools and Techniques" (SG.SI-4) requires that "the organization monitors events … to detect attacks, unauthorized activities or conditions, and non-malicious errors" based on the organization's "monitoring objectives and the capability of the smart grid information system to support such activities". The supplemental guidance states that this can be achieved through a variety of tools and techniques (e.g. intrusion detection systems, intrusion prevention systems, malicious code protection software, log monitoring software, network monitoring software, and network forensic analysis tools), and can include real-time alerting. "Incident Monitoring" (SG.IR-6) requires that "the organization tracks and documents … security incidents", potentially using "automated mechanisms to assist in the tracking of security incidents and in the collection and analysis of incident information". "Incident Reporting" (SG.IR-7) requires incident reporting procedures covering what constitutes an incident, the granularity of incident information, who receives it, and so on, again potentially employing "automated mechanisms to assist in the reporting of security incidents". "Auditable Events" (SG.AU-2) requires the development and review, at an organization-defined frequency, of a list of auditable events (including the execution of privileged functions) to identify which events are significant and relevant enough to be audited. "Audit Monitoring, Analysis, and Reporting" (SG.AU-6) requires audit record reviews and analyses to find and report inappropriate or unusual activity, potentially employing automated, centralized analysis tools. "Audit Reduction and Report Generation" (SG.AU-7) supports near real-time analysis and after-the-fact investigations of security incidents, e.g. by automatically processing audit records for events of interest based on selectable event criteria. "Audit Generation" (SG.AU-15) recommends an audit record generation capability, potentially compiling records from multiple components into a time-correlated, system-wide audit trail. A minimal sketch of such selectable-criteria audit reduction and time correlation follows after this list.
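To make the audit recommendations a bit more concrete, here is a small Python sketch in the spirit of SG.AU-7 and SG.AU-15: per-component audit records are merged into a time-correlated, system-wide trail and then reduced by selectable event criteria. The component names, event names, and record fields are invented for illustration; this is not NIST IR 7628 reference code.

```python
# Sketch: time-correlated, system-wide audit trail with selectable event criteria,
# in the spirit of SG.AU-7 and SG.AU-15. Components, events, and fields are invented.
from datetime import datetime, timezone

def audit_record(component: str, event: str, outcome: str) -> dict:
    """One audit record from one component, timestamped for later correlation."""
    return {"timestamp": datetime.now(timezone.utc).isoformat(),
            "component": component, "event": event, "outcome": outcome}

def correlate(*component_logs: list) -> list:
    """Merge per-component logs into one system-wide, time-ordered audit trail."""
    merged = [rec for log in component_logs for rec in log]
    return sorted(merged, key=lambda rec: rec["timestamp"])

def reduce_trail(trail: list, criteria: dict) -> list:
    """Audit reduction: keep only records matching the selectable event criteria."""
    return [rec for rec in trail
            if all(rec.get(k) == v for k, v in criteria.items())]

if __name__ == "__main__":
    meter_log = [audit_record("meter-gateway", "read-meter-data", "permit")]
    scada_log = [audit_record("scada-frontend", "operate-switch", "deny")]
    trail = correlate(meter_log, scada_log)
    print(reduce_trail(trail, {"outcome": "deny"}))   # events of interest only
```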
All this makes sense, but it is easier to write about than to actually implement, especially at the scale of a smart grid. Let's discuss each control in turn to see how model-driven security policy automation can help implement these recommendations effectively:
- Least privilege access control: What this specifically means is that a dynamic access control "whitelist" (i.e. stating what is allowed, vs. "blacklists" that state what is not allowed) needs to be available that enforces this policy requirement. Static access control models such as identity-based access control (IBAC) or role-based access control (RBAC) are not sufficient because they do not capture such context in the policy. As a result, virtually all IBAC/RBAC implementations, including traditional Identity and Access Management (IAM) technologies, are insufficient on their own. Attribute-based access control (ABAC), as for example standardized in XACML, helps add this missing context and other additional expressions to the policy. The flipside of ABAC is that those fine-grained contextual authorization policies are extremely difficult, time-consuming, and error-prone for human administrators to manually author and maintain. Model-driven security policy automation as implemented in OpenPMF can solve this unmanageability problem (a minimal sketch of such a contextual, whitelist-based check follows after this list).
- Information flow enforcement: As already mentioned above, IBAC and RBAC are insufficient on their own, and due to the inherently changing ("agile") nature of today's interconnected IT landscapes ("system of systems"), ABAC policies would need to be constantly manually updated to remain correct after "system of systems" changes, resulting in a policy management nightmare. There are a number of other problems with ABAC, e.g. challenges around authorization delegation across service chains and impersonation, which can be solved using authorization-based access control (ZBAC), which uses authorization tokens and federated authorization token servers. Model-driven security policy automation as implemented in OpenPMF can solve the unmanageability problem of both ABAC and ZBAC.
- Incident monitoring, incident reporting, and auditing: In the context of the fine-grained contextual authorization mentioned earlier, incident monitoring, reporting, and audit are intrinsically intertwined with authorization. Monitoring, reporting, and audit tools will need to know the specific authorization policies in order to decide whether behaviour is in fact suspicious or not. This differs dramatically from traditional monitoring approaches which mainly monitor for generic vulnerabilities (i.e. the same vulnerabilities occur for a particular technology, rather than for a particular business) and thus do not need to know any specifics about the organization’s business processes in order to flag an incident. I call control and visibility for generic vulnerabilities “security hygiene” to distinguish them from organization-specific policy enforcement and monitoring. Model-driven security incident monitoring and analysis, as implemented in OpenPMF, can solve the policy-driven monitoring challenge for authorization management compliance.
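To make the contrast with static IBAC/RBAC concrete, here is a minimal Python sketch of a contextual, whitelist-based policy decision point that also logs denials for policy-aware monitoring. This is not OpenPMF or NIST IR 7628 code; the attribute names, rules, and emergency levels are illustrative assumptions.

```python
# Minimal sketch of a contextual, whitelist-based policy decision point (PDP).
# All attribute names and rules below are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("policy-monitor")

# Whitelist rules: access is denied unless some rule matches (default deny).
# Each rule is a predicate over the request attributes ("context").
POLICY_RULES = [
    # Operators may read meter data only during the "billing" process step.
    lambda req: (req["role"] == "operator"
                 and req["resource"] == "meter-data"
                 and req["action"] == "read"
                 and req["process_step"] == "billing"),
    # Grid engineers may operate a feeder switch only at an elevated emergency level.
    lambda req: (req["role"] == "grid-engineer"
                 and req["resource"] == "feeder-switch"
                 and req["action"] == "operate"
                 and req["emergency_level"] >= 2),
]

def decide(request: dict) -> bool:
    """Return True if any whitelist rule permits the request; otherwise deny and log."""
    if any(rule(request) for rule in POLICY_RULES):
        log.info("PERMIT %s", request)
        return True
    # Policy-aware monitoring: a denial here is an incident candidate, because the
    # monitor knows the authorization policy, not just generic attack signatures.
    log.warning("DENY (possible incident) %s", request)
    return False

if __name__ == "__main__":
    decide({"role": "operator", "resource": "meter-data", "action": "read",
            "process_step": "billing", "emergency_level": 0})        # permitted
    decide({"role": "operator", "resource": "meter-data", "action": "read",
            "process_step": "maintenance", "emergency_level": 0})    # denied, logged
```

The point of the sketch is the default-deny whitelist plus the context attributes (process step, emergency level); in practice such rules would be generated and maintained automatically rather than hand-written, which is exactly where model-driven security comes in.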
I hope you enjoyed this analysis; comments are, as always, appreciated.
Friday, 29 July 2011
Analysis Series: HIPAA Security Rule & Privacy Rule and “minimum necessary” access
Today I would like to discuss what the “minimum necessary” access control in the Health Insurance Portability and Accountability Act (HIPAA) of 1996 really means and how such least privilege technical access control can be effectively implemented. The US government's HIPAA website explains:
- The "HIPAA Privacy Rule" establishes regulations for the use and disclosure of Protected Health Information (PHI), and in particular requires the implementation of least privilege: "A central aspect of the Privacy Rule is the principle of 'minimum necessary' use and disclosure." A covered entity must make reasonable efforts to use, disclose, and request only the minimum amount of protected health information needed to accomplish the intended purpose of the use, disclosure, or request. A covered entity must develop and implement policies and procedures to reasonably limit uses and disclosures to the minimum necessary, i.e. a covered entity may not use, disclose, or request the entire medical record for a particular purpose unless it can specifically justify the whole record as the amount reasonably needed for that purpose.
- The "HIPAA Security Rule" also limits uses and disclosures of PHI to the "minimum necessary": its administrative safeguards section requires a covered entity to implement and periodically assess policies and procedures for authorizing access to e-PHI only when such access is appropriate. Interestingly, this administrative (i.e. non-technical) section specifically states that this should be implemented "based on the user or recipient's role (role-based access)". The technical safeguards section mandates access control: a covered entity must "implement technical policies and procedures that allow only authorized persons to access electronic protected health information (e-PHI)", and must "record and examine access and other activity in information systems that contain or use e-PHI".
Role-based access control and user account management alone cannot capture "minimum necessary" in context. Instead, fine-grained, contextual authorization management (AM) is needed to enforce such policies. The challenge with AM is that policies are hard to author and maintain - there are simply too many technical rules, and maintaining them is too time-consuming, expensive, difficult, and error-prone. Also, these technical rules often do not directly match how humans think about business security policies.
To solve that policy maintenance show-stopper, model-driven security (MDS) policy automation is also needed, which automatically generates technical security rules from generic security policy requirements (models) that capture, for example, HIPAA security & privacy requirements. MDS takes these models, analyzes information sources such as business processes, applications and interactions, user information and other sources, and automatically generates the technical policy rules enforced by the AM. Most importantly, MDS can automatically update the rules when users, business processes, and applications change.
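As an illustration only, the following Python sketch shows the kind of transformation MDS performs for a "minimum necessary" requirement: a small, hypothetical policy model (which fields each purpose reasonably needs) is combined with the record structure to generate technical field-filtering rules, and the rules are simply regenerated when either side changes. The purposes and field names are invented for this example and are not OpenPMF artifacts.

```python
# Sketch: generate "minimum necessary" field-access rules from a high-level model.
# The model, purposes, and field names below are invented for illustration.

# High-level policy model: which PHI fields each purpose reasonably needs.
MINIMUM_NECESSARY_MODEL = {
    "billing":   {"patient_id", "insurance_id", "procedure_codes"},
    "treatment": {"patient_id", "diagnoses", "medications", "allergies"},
}

# Functional model of the system: fields actually present in the record type.
RECORD_FIELDS = {"patient_id", "insurance_id", "procedure_codes",
                 "diagnoses", "medications", "allergies", "full_history"}

def generate_rules(model: dict, fields: set) -> dict:
    """Derive technical allow-rules per purpose; anything not listed stays denied."""
    return {purpose: sorted(needed & fields) for purpose, needed in model.items()}

def filter_record(record: dict, purpose: str, rules: dict) -> dict:
    """Enforce the generated rules: return only the fields allowed for the purpose."""
    allowed = set(rules.get(purpose, []))  # unknown purpose -> empty set -> deny all
    return {k: v for k, v in record.items() if k in allowed}

if __name__ == "__main__":
    rules = generate_rules(MINIMUM_NECESSARY_MODEL, RECORD_FIELDS)
    record = {f: "..." for f in RECORD_FIELDS}
    print(filter_record(record, "billing", rules))    # only billing-related fields
    # When the record type or the model changes, rules are regenerated, not hand-edited:
    RECORD_FIELDS.add("lab_results")
    MINIMUM_NECESSARY_MODEL["treatment"].add("lab_results")
    rules = generate_rules(MINIMUM_NECESSARY_MODEL, RECORD_FIELDS)
    print(filter_record({**record, "lab_results": "..."}, "treatment", rules))
```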
Model-driven security (MDS) policy automation and fine-grained authorization management (AM) form a critical, unique combination to make this happen. The award-winning ObjectSecurity OpenPMF is the only MDS + AM product in the market. It is adopted by organizations with the most stringent security requirements, including the US Navy. We are currently completing a study and a scientific publication in which a number of regulations are analyzed in a similar fashion. Please contact us if you would like further information or if you have any questions/comments.
In conclusion: adopt effective technical mechanisms to implement the requirements. Just because current "best" practices for HIPAA do not implement "minimum necessary" effectively does not mean that your organization will get away with it when things go wrong!
Friday, 15 July 2011
Analysis Series: PCI DSS - what it says & what it means
I am delighted to announce a new "Analysis Series" on this blog: over the next couple of months I will publish numerous insights from an ongoing gap analysis of security standards and guidance documents, which is being carried out as part of ObjectSecurity's cloud security gap analysis project.
Today I would like to share my view of what the Payment Card Industry (PCI) Data Security Standard (DSS) version 2.0 has to say about access control and technical policy implementation. It says that "restricting access is crucial!", and the main point is covered here:
Requirement 7: Restrict access to cardholder data by business need to know
To ensure critical data can only be accessed by authorized personnel, systems and processes must be in place to limit access based on need to know and according to job responsibilities. Need to know is when access rights are granted to only the least amount of data and privileges needed to perform a job.
7.1 Limit access to system components and cardholder data to only those individuals whose job requires such access.
7.2 Establish an access control system for systems components with multiple users that restricts access based on a user’s need to know, and is set to “deny all” unless specifically allowed.
This clearly states that access policies need to be contextual by the job (not the job role!) - so, for example, if someone ("Alice") needs access to some customer's ("Bob") payment information for the job of charging Bob, the technical access policy implementation needs to make sure that Alice is actually involved in a sales transaction related to Bob, and that Alice is at the "charge the customer" step in the sales business process. This is called "context". It is important to understand that Alice should not have blanket access to all customers' payment data just because she might potentially have a transaction with any customer when they call and buy something; in that case, "need to know" would not be fully implemented.
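To make this notion of "context" tangible, here is a small, hypothetical Python check in the spirit of the Alice/Bob example: access to a customer's payment data is only permitted if the caller is assigned to a transaction with that customer and the transaction is currently at the "charge the customer" step. The data structures are invented for illustration.

```python
# Sketch of a contextual "need to know" check for the Alice/Bob example.
# Transactions and their workflow state are invented illustration data.
TRANSACTIONS = [
    {"id": 1001, "agent": "alice", "customer": "bob",   "step": "charge_customer"},
    {"id": 1002, "agent": "alice", "customer": "carol", "step": "quote"},
]

def may_read_payment_data(agent: str, customer: str, transactions: list) -> bool:
    """Permit only if the agent handles a transaction with this customer that is
    currently at the 'charge_customer' step (deny by default otherwise)."""
    return any(t["agent"] == agent
               and t["customer"] == customer
               and t["step"] == "charge_customer"
               for t in transactions)

if __name__ == "__main__":
    print(may_read_payment_data("alice", "bob", TRANSACTIONS))    # True: in context
    print(may_read_payment_data("alice", "carol", TRANSACTIONS))  # False: wrong step
    print(may_read_payment_data("alice", "dave", TRANSACTIONS))   # False: no transaction
```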
This example makes clear that role-based access control (RBAC) and user account management are not sufficient technical mechanisms to implement PCI-DSS. Instead, fine-grained, contextual authorization management (AM) is needed to enforce such complex policies. The challenge with AM is that policies are hard to author and maintain - there are simply too many technical rules, and maintaining them is too time-consuming, expensive, difficult, and error-prone. Also, these technical rules often do not directly match how humans think about business security policies.
To solve that policy maintenance show-stopper, model-driven security (MDS) policy automation is also needed, which automatically generates technical security rules from generic security policy requirements captured in models close to the understanding of PCI-DSS Requirement 7. MDS takes these models, analyzes information sources such as business processes, applications and interactions, user information and other sources, and automatically generates the technical policy rules enforced by the AM. Most importantly, MDS can automatically update the rules when users, business processes, and applications change.
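A minimal sketch of that generation step, under the assumption of an invented model format and invented role and service names: a Requirement-7-style "need to know" model is turned into default-deny technical rules, and the rules are regenerated rather than hand-edited when the application landscape or the model changes. This is an illustration, not the OpenPMF rule format.

```python
# Sketch: derive "deny all unless allowed" technical rules from a Requirement-7-style
# model and regenerate them when the application landscape changes.
# Role names, services, and the model format are invented for illustration.

# High-level model: which job responsibility needs which service operation.
NEED_TO_KNOW_MODEL = {
    "sales-agent":   [("payment-service", "charge")],
    "support-agent": [("order-service", "read_status")],
}

def generate_technical_rules(model: dict, deployed_services: set) -> list:
    """Emit one allow-rule per (role, service, operation) that is both in the model
    and actually deployed; everything else remains implicitly denied."""
    return [{"effect": "permit", "role": role, "service": svc, "operation": op}
            for role, needs in model.items()
            for svc, op in needs
            if svc in deployed_services]

if __name__ == "__main__":
    deployed = {"payment-service", "order-service"}
    print(generate_technical_rules(NEED_TO_KNOW_MODEL, deployed))
    # The landscape changes (a refund service is added and the model is extended);
    # the technical rules are regenerated rather than manually edited:
    deployed.add("refund-service")
    NEED_TO_KNOW_MODEL["sales-agent"].append(("refund-service", "refund"))
    print(generate_technical_rules(NEED_TO_KNOW_MODEL, deployed))
```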
In conclusion: start solving the real challenges instead of "something else". Don't wait until CISO comes to mean "career is suddenly over". Adopt effective technical mechanisms to implement the requirements. Just because "best" practices for PCI-DSS do not implement PCI-DSS correctly does not mean that your organization will get away with it when things go wrong.
Model-driven security (MDS) policy automation and fine-grained authorization management (AM) form a critical, unique combination to make this happen. The award-winning ObjectSecurity OpenPMF is the only MDS + AM product in the market. It is adopted by organizations with the most stringent security requirements, including the US Navy. Please contact us if you would like further information or if you have any questions/comments.
Monday, 27 June 2011
XACML is a machine policy format, not a policy authoring language for humans
We have said it since 2003, and finally we are not alone anymore. Increasingly, industry trade bodies and analysts agree with us that XACML is not a viable policy authoring language for humans. For example, OASIS recently held a webinar in which Domain-Specific Languages (DSLs) were mentioned in the "Future directions" part of the presentation. Great, because model-driven security policy automation is inherently based on DSLs, and ObjectSecurity's OpenPMF has full standards-based (Eclipse EMF) support for DSLs. Recently, an analyst who covers the authorization management space wrote that XACML is only good if it is hidden from humans.
This is great and shows that the industry is finally moving towards accepting policy automation as a necessary mechanism to make authorization management work. So I would like to explain in a bit more detail that it is not only about "hiding" XACML, but also about automatically generating the technical details from generic DSLs. DSLs should express policies in the way human security policy specialists think about policy, which might be different from how the technical enforcement actually makes concrete decisions. ObjectSecurity's award-winning and patent-pending model-driven security policy automation bridges the gap to the actual technical enforcement rules through transformation algorithms that can analyze many information sources (e.g. business processes, application mashups, directory information, sensor information) to automatically generate and update the technical rules. If you want to read up on this, feel free to read our website as an introduction, and get further details here.
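As a toy illustration of the "generate, don't just hide" point, the Python sketch below parses a one-line policy statement in an invented DSL and emits a simplified, XACML-flavoured rule. It is not OpenPMF, the DSL grammar is made up, and the output is deliberately not schema-valid XACML; a real generator would also pull in the information sources mentioned above.

```python
# Toy sketch: turn a human-readable policy DSL statement into a simplified,
# XACML-flavoured rule. The DSL grammar and the XML shape are invented; a real
# generator would emit complete, schema-valid XACML (or another target format).
import re
import xml.etree.ElementTree as ET

DSL_STATEMENT = "allow role nurse to read resource patient-record when step is treatment"

PATTERN = re.compile(
    r"allow role (?P<role>\S+) to (?P<action>\S+) resource (?P<resource>\S+)"
    r" when (?P<attr>\S+) is (?P<value>\S+)"
)

def dsl_to_rule_xml(statement: str) -> str:
    """Parse one DSL statement and emit a simplified permit rule as XML text."""
    m = PATTERN.fullmatch(statement)
    if not m:
        raise ValueError("statement does not match the toy DSL grammar")
    rule = ET.Element("Rule", {"Effect": "Permit", "RuleId": "generated-1"})
    target = ET.SubElement(rule, "Target")
    for name, value in [("role", m["role"]), ("resource", m["resource"]),
                        ("action", m["action"]), (m["attr"], m["value"])]:
        match = ET.SubElement(target, "Match", {"Attribute": name})
        ET.SubElement(match, "AttributeValue").text = value
    return ET.tostring(rule, encoding="unicode")

if __name__ == "__main__":
    print(dsl_to_rule_xml(DSL_STATEMENT))
```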
Tuesday, 31 May 2011
Government clouds (G-Cloud) - Security through Obscurity?
We are currently carrying out an R&D project about applying policy automation and ObjectSecurity OpenPMF to the cloud. Interestingly, government cloud initiatives worldwide seem to keep their information assurance (IA) architectures confidential (maybe even classified?). For example (just to name one), the UK Cabinet Office published a number of G-Cloud documents but deliberately did not publish the Information Assurance document. I have been in the security field for way too long (over 15 years) and have heard and seen evidence over and over again that security through obscurity's disadvantages outweigh its benefits. And I am apparently not the only one (e.g. concerns voiced here) who thinks this applies to G-Cloud. G-Clouds are large, interconnected IT landscapes that rely on standards and frameworks. How is this ecosystem ever supposed to come together if it is hidden under a cloak of obscurity? And how is the required innovation supposed to come in if that cloak prevents innovators from applying their solutions to G-Cloud? I believe that general government cloud architectures should be published so that the expert community can provide suggestions. It is also a good way to achieve some transparency about procurements and to push for standards. I would be grateful if G-Cloud initiatives could provide me with information about their IA architectures so I could explain why and how model-driven security policy automation and compliance automation should be integrated.
Wednesday, 27 April 2011
Cyber security paradigm shift needed: Focus on solving your customers' problems instead of “something else”!
Over the last decade, a lot of money has been spent on cyber security, while cyber security has become less effective in terms of preventing security breaches and the related damage. On the one hand, this is partly because of the increasing sophistication of attackers. But on the other hand, it is also partly because the cyber security industry fails to adequately address the really significant security problems, and instead sells "something else" that is easier to solve but does not solve those problems. While a defense-in-depth strategy is desirable, our industry needs to stop shying away from solving the big problems, including the general lack of enforcement mechanisms and procedures, preventing insider theft, preventing data leakage, the lack of mechanisms to implement regulatory compliance reliably for applications, and the lack of mechanisms to enforce least privilege / need-to-know policies.
One thing I hear repeatedly is that customers do not ask for solutions to their problems, but instead ask for a more or less ineffective "quick fix". I do not believe this is really true - customers are often unaware of how to pose the right questions to their vendors, or pose them very indirectly because their understanding of security is shaped by vendor offerings/marketing/misinformation. Discuss top down ("what are you trying to achieve?") instead of bottom up ("what product feature xy would you like to adopt?"). Here are some concrete questions to ask your customers:
1) HOW TO BE PREVENTIVE / PROACTIVE & REACTIVE VS JUST REACTIVE?
- How are enterprise security policies and regulatory compliance in general proactively enforced (=blocking based on whitelists), as opposed to just reactively monitored?
- How are enterprise security policies concretely implemented (enforced & monitored) in the software?
- How do you demonstrate that the implemented technical security actually matches with the intended enterprise security policies?
- How is automation used to achieve all this?
2) HOW TO PREVENT INSIDER BREACHES, LEAST PRIVILEGE?
- How are malicious or negligent insiders (or compromised accounts) prevented from committing massive data breaches?
- How are contextual policies, such as "least privilege" policies enforced, e.g. for HIPAA and PCI?
- How is automation used to achieve all this?
3) HOW TO MAINTAIN SECURITY IN THE FACE OF CHANGE? HOW TO AUTOMATE?
- What happens when the interconnected application landscape changes (e.g. SOA & cloud agility)?
- How is security made part of the software development lifecycle (SDLC) without burdening developers?
- How are the technical policies updated to match with the enterprise security policies and the changed environment in a fast, reliable, and cheap fashion?
- And how is compliance reliably demonstrated after updates?
- How is automation used to achieve all this?
4) CUSTOMER PAIN POINTS:
- Even if customers have not raised those points as described above, they will probably have implicitly asked for solutions to those problems. For example:
- If customers say "the deployment needs to comply with regulation xy", and the regulation states things like "data should only be used for the purpose", then you need to enforce least privilege (example: HIPAA). The same applies if customers ask for solutions to prevent insider breaches.
- If customers ask for preventing breaches, they will need real proactive policy enforcement (=blocking based on whitelists), and not just monitoring.
- If customers say "our IT landscape needs to be agile" or "future-proof", then they will need policy automation. Otherwise, manual policy implementation will effectively prevent IT agility (too many manual updates).
Wednesday, 6 April 2011
Give cloud users more control and close the loop: Standards-based policy in, standards-based audit out.
It is becoming increasingly clear to me that we need to give end-users more control over what security and auditing the cloud (especially for the higher layers, i.e. PaaS/SaaS) does for them. Cloud providers simply cannot know the end-user organization's business security & compliance policies, and therefore can only provide basic (but important) security and compliance support. This should happen in two main directions of a closed loop:
1) Policy in: we need standardized interfaces and policy formats which cloud providers can support, so that end-users can configure authorization, authentication etc. There are some standards out there, e.g. OASIS XACML, but these may sit at too application-specific a level. My company has advocated the use of models as a generic format to express policy - these can then be implemented automatically by cloud providers using model-driven security (a schematic sketch of such a policy-in / audit-out exchange follows after this list). Request more information here.
2) Audit out: We also need standard formats/APIs etc. to let end-user organizations tell the cloud provider what audit information they require, and when. It looks to me that CloudAudit is doing just that.
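Purely as a strawman, here is what such a closed loop could look like at the data level: a "policy in" payload carrying a model-level policy, an "audit out" payload describing the audit information the tenant requires, and the shape of an audit record the provider could return. None of this is an existing standard or the CloudAudit format; all field names are assumptions.

```python
# Strawman "policy in / audit out" exchange between a tenant and a cloud provider.
# All interface shapes and field names are assumptions, not an existing standard.
import json
import datetime

# Policy in: a generic, model-level policy the provider is asked to implement.
policy_in = {
    "tenant": "example-org",
    "policy-model": [
        {"intent": "least-privilege",
         "scope": "customer-data",
         "context": {"business-process": "order-fulfilment"}},
    ],
}

# Audit out: which audit information the tenant requires, and how it is delivered.
audit_requirements = {
    "tenant": "example-org",
    "events": ["access-denied", "policy-updated"],
    "delivery": {"format": "json", "interval": "hourly"},
}

def provider_audit_record(event: str, detail: str) -> dict:
    """Shape of an audit record the provider could return to the tenant."""
    return {"tenant": "example-org",
            "event": event,
            "detail": detail,
            "timestamp": datetime.datetime.utcnow().isoformat() + "Z"}

if __name__ == "__main__":
    print(json.dumps(policy_in, indent=2))
    print(json.dumps(audit_requirements, indent=2))
    print(json.dumps(provider_audit_record("access-denied", "out-of-context read"), indent=2))
```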
Would anyone be interested in joining forces to bring a community together to do what CloudAudit does for Policy? Please contact me or post your interest on this discussion.
Feel free to comment on this blog, or join the discussion on the Cloud Security Alliance LinkedIn group.
Friday, 1 April 2011
Implementing security policy automation: Free lunch? Unfortunately not.
Good security is ultimately about figuring out what should happen, and making sure that anything else does not happen. In security terms, this means figuring out enterprise security and compliance policies first, then figuring out how to implement controls across technology, processes, and people.
Unfortunately this is hard, which is why most security products and methods in the market avoid enforcing your policy altogether. For example, antivirus, anti-malware, etc. are useful "hygiene" tools but do not know enough about your business to even enforce the policies that matter (e.g. PCI, HIPAA, NERC/FERC, Common Criteria...). Other tools (IDSs, compliance monitoring etc.) also do not know the policy that matters and simply monitor something, so some administrator - if they can weed through the overload - may spot that you got hacked, which is better than nothing but does not prevent you from being hacked. Other tools enforce a policy (e.g. firewalls, identity management), but usually not the policies, or the granularity/contextuality, that matter to the business. While I am a proponent of "defense in depth", I would sum up the current state of most of the security vendor landscape and end-user purchasing behavior as "solving something we can solve", rather than actually solving the real security problems.
However, doing this right by stating what you want and enforcing it is hard: for example, manually producing many complex, context-aware technical policy rules ("whitelisting") for a highly interconnected, large Service Oriented Architecture (SOA) or cloud mash-up is highly error-prone and expensive, and is also totally unmaintainable. There is also little assurance that the configured policy actually captures the intent.
Policy automation tools such as OpenPMF make this easier and more maintainable, especially for agile IT landscapes (incl. SOA/cloud): they let security and compliance specialists capture policies at an intuitive level as models (similar to enterprise architecture and business process models), and automatically take care of generating, enforcing, and monitoring the matching technical rules. However, this is no free lunch either - figuring out and capturing the requirements and configuring everything is not easy and takes time. It also will not work elegantly for each and every kind of system. However, when you compare it to the two alternatives, (1) solving something but not the problem and (2) incurring a manual administration nightmare, it is a compelling approach.
Tuesday, 22 March 2011
"Least privilege", "need to know", insider threats & WikiLeaks
We are happy to see a recent increase in understanding in large enterprises and government that policies (security & compliance) need to be proactively enforced, and not just monitored. To motivate my point: one of the hype topics at this year's RSA Expo was "continuous monitoring", which essentially tells you that you got attacked earlier than normal compliance auditing would. This is necessary but not sufficient: necessary because there is no 100% security protection; not sufficient because you need to prevent attacks proactively. Such real prevention is difficult to manage because it requires that someone captures the security & compliance requirements in a technical policy "whitelist". However, without a whitelist of allowed actions, "least privilege" and "need to know" cannot be implemented. And it is exactly that least privilege principle that prevents insider attacks and attacks where outsiders hijack insider credentials. It could potentially have prevented the WikiLeaks leak, because if least privilege had been enforced correctly, access to all that information would not have been granted. Security policy automation and model-driven security help capture the requirements and automate enforcement. Least privilege can, for example, be elegantly captured by tying policies to the sequence of steps in the workflow of a SOA orchestration: you can only access a particular web service in a particular step of a workflow for which you have been authorized, and only if you have correctly gone through the workflow up to the point where you can access the web service. Again, capturing SOA BPM workflows and security & compliance models is not easy, but easier approaches (e.g. firewalls, anti-malware, code scanning, IDS etc.) are not able to solve the least privilege & need to know problem. Contact us at www.objectsecurity.com if you have any questions/comments.
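To illustrate the workflow-sequence idea, here is a hypothetical Python sketch in which a call mapped to a workflow step is only authorized if the caller has completed all preceding steps of the orchestration in order. The workflow and step names are invented; this is not how OpenPMF itself is implemented.

```python
# Sketch: least-privilege access tied to correct progression through a workflow.
# The workflow steps and service names are invented for illustration.
WORKFLOW = ["submit_request", "manager_approval", "retrieve_document"]

class WorkflowTracker:
    """Tracks which workflow step each user has legitimately reached."""
    def __init__(self):
        self.progress = {}  # user -> index of the next expected step

    def complete_step(self, user: str, step: str) -> bool:
        """Advance the user only if `step` is the next expected step."""
        expected = WORKFLOW[self.progress.get(user, 0)]
        if step != expected:
            return False            # out-of-sequence attempt: deny, do not advance
        self.progress[user] = self.progress.get(user, 0) + 1
        return True

    def may_call(self, user: str, step: str) -> bool:
        """A service mapped to `step` may only be called when it is the next step."""
        idx = self.progress.get(user, 0)
        return idx < len(WORKFLOW) and WORKFLOW[idx] == step

if __name__ == "__main__":
    t = WorkflowTracker()
    print(t.may_call("alice", "retrieve_document"))   # False: steps were skipped
    t.complete_step("alice", "submit_request")
    t.complete_step("alice", "manager_approval")
    print(t.may_call("alice", "retrieve_document"))   # True: reached via the workflow
```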
Friday, 21 January 2011
Cloud Security Alliance Presentation on Policy Automation: 8 Feb 2011, noon PST (WebEx & Sunnyvale, CA, USA)
What: Security Policy Automation for Cloud Applications
When: Tuesday, February 8, 2011 12:00 PM
Details & RSVP to this Silicon Valley CSA Meetup:
http://www.meetup.com/SV-CSA/calendar/16049370/
ABSTRACT:
You have to plan ahead in terms of security when moving parts of your organization's IT into the Cloud. Compromises and mistakes made early on, when things are small and less critical, will come back and haunt you later. In this session, you will learn why security automation is important to meet both regulatory compliance requirements and the financial rationale behind Cloud adoption. The financial ROI of Cloud security and compliance is judged by decision makers in end-user organizations by the same measures as Cloud computing in general, i.e. by how much it cuts up-front capital expenditure and in-house manual maintenance cost. However, manually translating security policy into technical implementation is difficult, expensive, and error-prone (especially for the application layer). In order to reduce security-related manual maintenance cost at the end-user organization, security tools need to become more automated. This session explains how automated tools can be used to translate security policy into technical security implementation for Cloud applications (using an approach known as "model-driven security"), so that security practitioners can better support the financial rationale behind Cloud computing, and also influence Cloud providers to provide better security tools. The session will also cover how this approach helps achieve regulatory compliance for the cloud.
SPEAKER:
Dr. Ulrich Lang is the co-founder and CEO of ObjectSecurity®, “The Security Policy Automation Company™”. ObjectSecurity’s OpenPMF™ product makes application security manageable through automation. Ulrich is a renowned thought leader, author and speaker on model-driven security, security policy, Cloud/SOA/middleware/application security, and has over 15 years of experience in information security. He received a PhD from the University of Cambridge Computer Laboratory (Security Group) on conceptual aspects of middleware security in 2003, after having completed a Master's Degree in Information Security with distinction from Royal Holloway College (University of London) in 1997.