Identity Security and Artificial Intelligence: Certification of Access

In a blog published in October 2021, Identity Security and Artificial Intelligence, the AI/ML Technical Working Group subcommittee introduced a blog series exploring how the application of artificial intelligence (AI) and machine learning (ML) can help solve some of the complexities of modern identity security practices, as well as the challenges involved. The first deep dive focused on access requests and provisioning; in this blog, we turn to the periodic or continuous certification of identity access.

What is Certification?

To understand how advances in artificial intelligence can be applied to improve identity-based certifications, it is necessary to explore the common components that make up a certification.

As with our previous blog discussing the application of AI/ML to access requests, we need to look at the actors – the certifier and the target of certification.

  • Certifier: The person responsible for determining whether access is appropriate. This could be a manager reviewing their direct reports’ access assignments or an IT resource owner reviewing all the identities that have access to their application. This is also the person who can be held responsible during an audit if any findings are discovered.
  • Target: For an identity-based certification, this is the user whose access is being reviewed.

Equally important are the entitlement objects contained in the certification. The term entitlement encompasses all the access, permissions, privileges, etc., that are assigned to identities in an organization.

Certification, also referred to as attestation, is a critical governance process by which an identity’s access is reviewed in its entirety by another individual. Access that is necessary for an identity’s job profile is retained, and access that is not appropriate is removed. This helps organizations maintain a least-privilege approach to their identity security. This proactive review also reduces the risk of access creep – the gradual over-accumulation of privileges by an identity.

Periodic certification of all identities within an organization is a common security control that organizations put in place to maintain compliance with various regulatory requirements. Modern trends such as cloud migration, cloud acceleration, and robotic process automation have vastly increased both the type and the number of identities that need to be reviewed. The sheer volume of access to be attested to is quickly outgrowing the capacity of individual certifiers.

Hidden Complexity

One of the most basic challenges with manually certifying access is determining the minimum set of privileges that an identity needs in order to complete its job function. This can be especially difficult if the units of access assigned are not easily understandable. In an ideal world, everything is clearly labeled and human-readable – but this is rarely the case. Low-level entitlements and privileges are often confusing to a certifier. They must pause their certification efforts to track down what the access is before they can determine if it is appropriately assigned to the target. The larger the certification campaign, the longer this can take – which leads to the undesirable outcome of periodic certifications extending past their deadline and becoming overdue. This puts organizations at risk of being out of compliance.

With the explosion of identities caused by recent shifts to remote work, cloud migration, cloud acceleration, and digital transformation, the sheer scale of traditional compliance campaigns has become a significant challenge. A single certifier, instead of attesting to a few dozen access assignments, is now responsible for deciding on hundreds (or more!) of line items in a single campaign. This is not necessarily new, as some of the largest and most complex organizations have long dealt with this issue, but it has become much more prevalent for the reasons listed above. The risk is that instead of meaningfully determining, for each target and each access, whether the combination is valid, human nature kicks in and certifiers begin to make bulk decisions. This is what is called ‘rubber stamping’, and it can lead either to access being inappropriately removed, causing friction with the business, or to over-privileged identities remaining after a compliance campaign is completed.

Lastly, there is the challenge of ‘certification fatigue’. As mentioned earlier, it is extremely common for organizations to run compliance campaigns at least quarterly. These are often seen as a chore for the certifiers, and the larger the organization or team, the bigger the chore. This fatigue can again lead to rubber stamping, or to haphazardly approving and revoking access, which decreases the fidelity of the review and increases the risk of audit findings.

Simplification through AI and ML

AI/ML-generated recommendations are a great tool for certifiers to use when determining whether a target should retain or lose access. These recommendations can be based on several factors. First, peer group analysis can help determine whether the access rights of the target under review are the same as, or similar to, those of other identities in the same job function or on the same team. The data points that can be compared are numerous, but the outcome is that certifiers are aided in their decision-making by corroborating evidence around the assignments. Second, usage information can be incorporated into the model to recommend removing access that is stale – meaning it hasn’t been used in a significant amount of time. Lastly, going beyond peer groups and usage, AI/ML recommendations can also weigh other security concerns that could and should determine whether access is appropriate. For example, if a user has highly sensitive access to several critical business applications but has also failed to complete the required security training and has a history of downloading malicious files, perhaps that user should lose access temporarily until these other risk factors are mitigated.
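
To make the peer-group and usage signals concrete, here is a minimal sketch of how a recommendation engine might combine the two. Everything here is an assumption for illustration – the input shapes, the thresholds, and the recommend function are hypothetical, not any particular product’s API:

```python
from datetime import datetime, timedelta

# Hypothetical input shapes, assumed purely for illustration:
#   entitlements: dict mapping identity id -> set of entitlement ids
#   last_used:    dict mapping (identity, entitlement) -> naive datetime of last use

STALE_AFTER = timedelta(days=90)   # assumed staleness window
PEER_THRESHOLD = 0.30              # assumed minimum peer-adoption rate

def recommend(target, peers, entitlements, last_used, now=None):
    """Suggest certify/review per entitlement for one target identity."""
    now = now or datetime.now()
    suggestions = {}
    for ent in entitlements[target]:
        # Peer-group analysis: what fraction of the target's peers hold this entitlement?
        peer_rate = sum(ent in entitlements[p] for p in peers) / max(len(peers), 1)
        # Usage analysis: has the entitlement been exercised recently?
        used_at = last_used.get((target, ent))
        is_stale = used_at is None or (now - used_at) > STALE_AFTER
        if peer_rate >= PEER_THRESHOLD and not is_stale:
            suggestions[ent] = ("certify", f"{peer_rate:.0%} of peers hold this; recently used")
        else:
            suggestions[ent] = ("review", f"peer adoption {peer_rate:.0%}; stale={is_stale}")
    return suggestions
```

A real model would of course weigh many more data points (job codes, departments, risk signals), but the principle – surfacing corroborating evidence alongside each line item – is the same.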

With proper training, it is also possible to reach a stage of maturity where the AI/ML engine completes certifications continuously with little to no human intervention, tuning access in real time based on all of the above signals. If full automation isn’t desired, the completed certification can instead be presented to a certifier for final review and sign-off. This simplifies the certifier’s involvement, decreasing the duration of certification campaigns and increasing their accuracy.
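
As a rough illustration of how that final human sign-off can be retained, the sketch below routes only ambiguous decisions to a certifier while letting high-confidence ones complete automatically. The thresholds and every function name are assumptions made for this example, not a reference implementation:

```python
AUTO_CERTIFY_ABOVE = 0.95   # assumed confidence required to certify without review
AUTO_REVOKE_BELOW = 0.05    # assumed confidence below which access is revoked

def continuous_certification(items, model_score, auto_certify, auto_revoke, queue_for_certifier):
    """Confidence-gated loop: the callables are hypothetical hooks into a
    governance platform, passed in so the sketch stays self-contained."""
    for item in items:
        score = model_score(item)  # model's confidence that the access is appropriate
        if score >= AUTO_CERTIFY_ABOVE:
            auto_certify(item, score)
        elif score <= AUTO_REVOKE_BELOW:
            auto_revoke(item, score)
        else:
            # Ambiguous cases fall back to a human certifier for final sign-off.
            queue_for_certifier(item, score)
```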

Challenges

While the benefits of using AI and ML to make compliance campaigns more efficient are compelling, there are also challenges in actual implementation. First, tuning the recommendations provided in a certification is difficult, and it takes a great deal of time and data before they become accurate and informative. Just because an identity’s access doesn’t line up with that of identities with similar profiles does not necessarily mean it is inappropriate; there may be other contextual factors that make the outlier access acceptable. Without that context, these false-positive outliers may have legitimate access removed based on a recommendation.
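
One hedge against this failure mode, sketched below under assumed names, is a guardrail that blocks automatic revocation whenever a documented contextual exception exists for the outlier access:

```python
# Hypothetical exception store: (identity, entitlement) pairs with a
# documented business justification, e.g. access granted for an active project.
APPROVED_EXCEPTIONS = {("jdoe", "prod-db-admin")}

def safe_to_auto_revoke(identity, entitlement, is_peer_outlier):
    """Only allow automatic revocation when no contextual exception applies."""
    if (identity, entitlement) in APPROVED_EXCEPTIONS:
        return False  # contextual factor overrides the peer-outlier signal
    return is_peer_outlier
```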

Which leads to the ultimate issue: Trust.

When a certifier signs off on the decisions made in their certification, they are stating, in a non-refutable way, that the access decisions they have made are accurate and comply with organizational security policies. Should something go wrong, or should access anomalies be discovered during an audit, the repercussions can be significant for both the business and the certifier. The level of confidence a certifier must have in the accuracy of the recommendations provided by an AI needs to be very high. This trust becomes an even greater challenge if the intention is to allow the AI to make decisions autonomously during automated certification campaigns, since mistakenly revoking critical access could significantly impact business productivity and continuity. Therefore, the weighting factors used to determine whether access is appropriate need to be well defined and visible, both at the time of certification and in the future, when understanding the context of a decision becomes important.
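
One way to keep those weighting factors visible, both at certification time and later during an audit, is to persist them alongside every decision. The sketch below assumes three normalized signals and arbitrary weights; all names and values are illustrative, not a prescribed scoring model:

```python
import json
from datetime import datetime, timezone

# Assumed weighting factors; a real deployment would tune these per organization.
WEIGHTS = {"peer_similarity": 0.5, "recent_usage": 0.3, "risk_signals": 0.2}

def score_with_audit_record(item_id, signals):
    """Score one access item and emit an audit record preserving the
    weighted factors behind the decision (illustrative only)."""
    score = sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)
    record = {
        "item": item_id,
        "score": round(score, 3),
        "factors": {k: {"weight": WEIGHTS[k], "value": signals[k]} for k in WEIGHTS},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return score, json.dumps(record)  # the record can be stored for future audits
```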


About the Author: The Artificial Intelligence and Machine Learning Technical Working Group subcommittee was formed in July 2020. The team, led by Adam Creaney, includes Srinivas Kasula, Tom Malta, Asad Ali, Allen Moffett, Jerry Chapman, Eric Uythoven, and Ravi Erukulla.
