In a blog published in October (Identity Security and Artificial Intelligence), as part of #BeIdentitySmart week, the AI/ML Technical Working Group subcommittee outlined a blog series that will explore how Artificial Intelligence and Machine Learning are being utilized today, and how they can be used in the future to provide organizations with more effective Identity Security. In this blog, the first to go deeper into this exciting evolution, we will dive into access requests and provisioning.
What is an access request?
To understand how advances in artificial intelligence can be applied to improve identity-based access requests, it is necessary to explore the common components that make up a request.
First, there are the actors – requester, requestee, and (optionally) approvers:
- Requester: The person making an access request – this could be a user requesting for themselves, a manager requesting for a direct report, or even an automated process requesting access in accordance with a pre-defined policy.
- Requestee: This is the target of the request – normally a user or group of users, though increasingly this can also be a non-human or machine identity.
- Approver: A person or persons responsible for reviewing the contents of a request to determine whether the access being granted is appropriate. This could be a manager, a security group, or an asset owner.
Second, there are the permission objects – these represent the actual access that an identity requires and would receive should the request process complete. To keep it simple, let’s look at access as two objects:
- Account: The identity’s actual account on an IT system
- Access: What level of access the identity has associated to their account on a connected IT system. This could be as simple as a group assignment in a directory service, or as complex and low level as a Linux permission on a specific file
Lastly is the request process itself. Generally speaking, there are three major steps that make up a request:
- Submission: An identity needs access to complete a work-related function. The user determines what access they need and makes a request. As noted earlier, this process can also be data driven – new user joins an organization, and a request is automatically submitted for their day one access. The submission itself can originate in several places – a governance platform, an ITSM, messaging tools, or even via an email to the IT department.
- Approval: An entity – typically a manager or asset owner – reviews the content of the request and determines whether the access is truly needed, appropriate, and compliant with policy.
- Provisioning: Once approved, the requested access is provisioned either programmatically through technical integrations between systems or manually via an agent.
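The actors, objects, and three-step process above can be sketched as a minimal state machine. This is an illustrative data model, not any particular platform's API; all class and field names are assumptions:

```python
from dataclasses import dataclass
from enum import Enum


class RequestState(Enum):
    SUBMITTED = "submitted"
    APPROVED = "approved"
    PROVISIONED = "provisioned"


@dataclass
class AccessRequest:
    requester: str   # person (or automated process) making the request
    requestee: str   # identity that will receive the access
    account: str     # the identity's account on the target IT system
    access: str      # the permission itself, e.g. a directory group
    state: RequestState = RequestState.SUBMITTED

    def approve(self) -> None:
        # Approval: a manager or asset owner signs off on the request
        if self.state is RequestState.SUBMITTED:
            self.state = RequestState.APPROVED

    def provision(self) -> None:
        # Provisioning: only approved access reaches the target system
        if self.state is RequestState.APPROVED:
            self.state = RequestState.PROVISIONED


req = AccessRequest("manager_bob", "new_hire_eve", "eve@corp.example", "HR-Readers")
req.approve()
req.provision()
print(req.state)  # RequestState.PROVISIONED
```

Note that provisioning is gated on approval: a request that skips the approval step never reaches the provisioned state, mirroring the process described above.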
Conceptually, the actors, objects, and process in an access request are straightforward. However, in practice things quickly become more complex.
Take for example the case where an identity is new to an organization or project – how do they know what access they are going to need? The common solution is to rationalize access into Role-based or Attribute-based access control policies, making it easier for end users to find the permissions they need. However, there is a nonzero chance that some required access will fall outside of defined roles or attribute policies. There is also the potential that some roles or policies will be overly broad – containing access that isn’t necessary for all identities that fit that role. How often are these policies reviewed and validated?
Once the (hopefully) correct additional access has been requested – the approver(s) need to (fingers-crossed) make the right decision regarding whether the access is appropriate. How does the approver answer this question? An asset owner may know exactly what the access being requested allows a user to do – but a manager may not have that depth of understanding. An approver should have confidence that they fully understand the ramifications of allowing an identity to be granted access. To truly do this, they would need to look at not only the contents of the request but the current state of the requesting identity, their current access, risk level, and behavior (past and present). No one has time for that kind of deep investigation, so a different approach is required.
Simplification through AI and ML
A cornerstone of an effective ‘Zero Trust’ architecture is the principle of least privilege (PoLP). Application of the PoLP in the context of access requests means that when a user makes a request – the access it contains is the minimal set of permissions the user requires to complete their authorized function. Role and attribute-based access policies can be an amazing way to logically and repeatedly provide users with only the permissions they need based on their identity profile, but these controls also have issues that need to be addressed.
Permissions can also be overly broad or too narrow in scope – leading to either overly privileged users, or unchecked policy proliferation. Access policies should be continuously monitored, re-evaluated, and modified to ensure accuracy. Ideally, this process should be asking many questions to determine the validity of a role or policy. Does any combination of access in this policy create a situation that could potentially lead to greater risk? Have there been actual realized incidents based on the risk potential of this access? Does this policy contain permissions on old, or no longer used applications? Is a user required to take additional training before this access should be applied?
In order to answer these questions – and many others – countless signals from across the IT space need to be analyzed. Given the large amount of data that needs to be processed, continuously – the application of machine learning and artificial intelligence here could be of great benefit. Taking as input signals from security technologies – CASB, SIEM, UEBA, MDM – a robust AI would ensure that the access policy model is continually refreshed and kept up to date.
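As a toy illustration of that kind of continuous policy review, the check below flags a role for re-evaluation based on a few of the questions above. The signal format, the signal names, and the risky permission combination are all hypothetical examples, not a real product schema:

```python
# Hypothetical example of a permission pair that is risky in combination
# (classic separation-of-duties conflict).
TOXIC_COMBINATIONS = [frozenset({"create_vendor", "approve_payment"})]


def needs_review(role_permissions, signals):
    """Return human-readable reasons a role/policy should be re-evaluated."""
    perms = set(role_permissions)
    reasons = []
    for combo in TOXIC_COMBINATIONS:
        if combo <= perms:  # every permission in the risky combination is present
            reasons.append(f"risky combination: {sorted(combo)}")
    for perm in role_permissions:
        sig = signals.get(perm, {})
        if sig.get("incidents", 0):
            reasons.append(f"{perm}: linked to {sig['incidents']} incident(s)")
        if sig.get("app_retired"):
            reasons.append(f"{perm}: grants access to a retired application")
    return reasons


# Signals of this shape would be aggregated from tools like a SIEM or CASB.
signals = {
    "approve_payment": {"incidents": 2},
    "legacy_app_login": {"app_retired": True},
}
for reason in needs_review(["create_vendor", "approve_payment", "legacy_app_login"], signals):
    print(reason)
```

A real system would learn these checks and thresholds from the signal data itself rather than hard-coding them; the sketch only shows the shape of the evaluation.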
Provided there is a zero-trust compliant policy program in place, whether that is attribute or role based (or some combination of both), the next challenge in identity-based access requests is – how does an end-user know what access they need? Let’s look at how this tends to be oversimplified with a hypothetical set of actors, and along the way see how AI/ML can help. Bob is an HR coordinator, so Bob needs the same access as the other HR Coordinators. Alice is an engineer, so Alice needs the same access as other engineers. This makes sense – but ignores the subtlety of access. What team is Alice on? What projects is Alice actively engaged with? Where is Alice located? Has Alice taken the training required for access? Is Alice the target of an unusual number of phishing emails? Does Alice have any upcoming HR events? The number of disparate signals required to determine what Alice should be able to request is large and constantly shifting. AI/ML can assist here by providing greater visibility and insight to the requester at time of request. Intelligent access recommendations based on concepts like peer group analysis and/or correlation of behavioral data to users can greatly simplify the request process and abstract the subtlety of access away from the end user.
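A minimal sketch of peer group analysis: recommend entitlements that most of a user's peers already hold but the user does not. The entitlement data and the 60% threshold are illustrative assumptions:

```python
from collections import Counter


def recommend_access(user, peers, entitlements, threshold=0.6):
    """Suggest entitlements held by at least `threshold` of the peer group."""
    # Count how many peers hold each entitlement
    counts = Counter(e for p in peers for e in entitlements.get(p, set()))
    already_has = entitlements.get(user, set())
    return sorted(
        e for e, n in counts.items()
        if e not in already_has and n / len(peers) >= threshold
    )


# Hypothetical entitlement data keyed by identity
entitlements = {
    "alice": {"vpn"},
    "peer1": {"vpn", "git", "jira"},
    "peer2": {"vpn", "git", "jira"},
    "peer3": {"vpn", "git"},
}
print(recommend_access("alice", ["peer1", "peer2", "peer3"], entitlements))
# ['git', 'jira']
```

In practice the peer group itself would be derived from the attributes discussed above (team, project, location), and behavioral signals would weight the recommendations, rather than a flat threshold.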
Once the appropriate access has been requested, there are often regulatory or organizational requirements for approval of access. Again, on the surface this is simple – Carol requests access and her manager Dave approves. But how does Dave know that the access Carol is requesting is appropriate? On a small, focused team this could be straightforward – the access required is well defined, and the role of the individual is well known, and the approver doesn’t have to make a large number of approvals. But what happens to this process as teams get larger, workers are more distributed, and identities participate in cross-functional projects?
Approvers are faced with more approvals, with less context and ability to make an informed decision. This can often lead to approval fatigue and ‘rubber stamping’. Artificial intelligence can be applied to approvals as well. Providing an approver with instantaneous recommendations (approve/reject) based on the contents of the request and the requesting identity’s context (more on this later) means that approvals are done more quickly and more appropriately, without introducing uncertainty on the part of the approver. What does ‘identity context’ mean? In this case, it is considering the user’s existing access, historic access, attributes, and behavior. It can also consider whether the access being requested has been involved in any security related incidents. If that access is part of a role, or an attribute-based policy – has this role/policy been reviewed recently?
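One way such a recommendation could be produced is a simple weighted score over identity context. Every field name, weight, and threshold below is an illustrative stand-in; a production system would learn these from historical approval decisions rather than hard-code them:

```python
def approval_recommendation(request, context):
    """Score a request against the requesting identity's context and
    return a recommendation plus the reasons behind it."""
    score, reasons = 1.0, []
    if request["access"] in context.get("peer_access", set()):
        reasons.append("access is common in the requester's peer group")
    else:
        score -= 0.4
        reasons.append("access is unusual for the requester's peer group")
    if context.get("recent_incidents", 0):
        score -= 0.3
        reasons.append("identity linked to recent security incidents")
    if not context.get("policy_recently_reviewed", True):
        score -= 0.2
        reasons.append("underlying role/policy has not been reviewed recently")
    verdict = "approve" if score >= 0.5 else "review"
    return verdict, reasons


ctx = {"peer_access": {"git"}, "recent_incidents": 0, "policy_recently_reviewed": True}
print(approval_recommendation({"access": "git"}, ctx))
# ('approve', ["access is common in the requester's peer group"])
```

Returning the reasons alongside the verdict matters: it gives the approver (and later an auditor) something to evaluate beyond a bare approve/reject flag.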
AI/ML can bring tangible benefits to the access request process, but practical challenges present themselves as we contemplate implementation.
The machine learning algorithms that allow an artificial intelligence to make accurate recommendations and decisions require a massive amount of data from many disparate systems. The more data, the more accurate any AI-based action will be. Gathering this information from systems throughout an organization is critical. The lack of uniform data models across applications makes collecting risk signals difficult. The identity data that the algorithms need to process is also sensitive (containing Personally Identifiable Information) and unique to each organization. Analyzing access and providing accurate recommendations also requires looking at organization-specific signals and trends over time. All these reasons make implementing a day-one solution challenging – and a period of ‘ramp up’ is often required.
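The data-model problem can be made concrete with a small normalization layer. The per-tool field names below are invented for illustration; each real SIEM, CASB, or UEBA product exposes its own event format, and mapping them all is part of the ramp-up effort:

```python
def normalize(source, event):
    """Map a tool-specific event into one shared risk-signal shape."""
    if source == "siem":
        return {"identity": event["user"], "signal": event["rule"],
                "severity": event["sev"]}
    if source == "casb":
        return {"identity": event["actor"], "signal": event["policy"],
                "severity": event["risk"]}
    raise ValueError(f"no mapping for source: {source}")


# Two hypothetical events about the same identity, in different formats
raw = [
    ("siem", {"user": "carol", "rule": "impossible_travel", "sev": 8}),
    ("casb", {"actor": "carol", "policy": "bulk_download", "risk": 6}),
]
unified = [normalize(src, evt) for src, evt in raw]
print(unified[0]["signal"])  # impossible_travel
```

Only once signals share a schema like this can they be correlated per identity and fed to the learning algorithms described above.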
Simplification of complex decision-making using recommendations based on machine learning stops short of full automation. An artificial intelligence isn’t directly deciding what access is requested and what access is approved. While that level of automation is a logical next-step, the reality is that there is still a certain amount of distrust or hesitancy in having a zero-touch process in charge of requests and approvals. This is especially true if compliance requires recording who was responsible for the approval. For this reason, it is important that there is transparency and understanding around why AI/ML makes a recommendation – both to end users as well as to auditors.
About the Author: The Artificial Intelligence and Machine Learning Technical Working Group subcommittee was formed in July 2020. The team, led by Adam Creaney, includes Srinivas Kasula, Tom Malta, Asad Ali, Allen Moffett, Andrea Tomassi, Jerry Chapman, Namson Tran, Eric Uythoven and Ravi Erukulla.