ReABAC, Next Generation Flexible Authorization

#programming

This article has been ported from my Medium article.

When the Google Zanzibar whitepaper dropped in 2019, it re-invigorated an authorization model that hadn’t seen a ton of success with large organizations. Folks have been using ReBAC for over a decade at this point, but it has generally been implemented as AReBAC on top of graph systems. An example of this approach is the paper by Syed Zain Raza Rizvi at the University of Calgary, which uses ReBAC as the primary authorization model, supported by attributes on the nodes and edges. My proposal flips that model: ReABAC, which I define as Relationship-aware Attribute-Based Access Control.


ReBAC

We’ll get to what that means in a moment, but first, let’s go over what Zanzibar changed about ReBAC. In ReBAC, access requirements are expressed as queries on a directed graph, which lets you capture requirements over relationships both simple and complex. An example below: bob is a member of the editor group, which has write access to a document (in Cypher pseudo-code).

(bob:User)-[:MEMBER]->(group:Editor)-[:WRITE]->(blog-post:Doc)

You can then ask the system whether bob can write to blog-post with an appropriate query, like the sketch below.
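
For a graph database, that check is essentially a path-existence query. Here is a sketch using the neo4j JavaScript driver; the labels, property names, and connection details are all hypothetical.

import neo4j from "neo4j-driver";

// Hypothetical connection details.
const driver = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "password"));

// Can this user write to this document, via membership in any group that
// holds a WRITE edge to it?
async function canWrite(userName: string, docName: string): Promise<boolean> {
  const session = driver.session();
  try {
    const result = await session.run(
      `MATCH (:User {name: $userName})-[:MEMBER]->(:Group)-[:WRITE]->(:Doc {name: $docName})
       RETURN count(*) > 0 AS allowed`,
      { userName, docName }
    );
    return result.records[0].get("allowed");
  } finally {
    await session.close();
  }
}

// canWrite("bob", "blog-post") resolves to true when the path exists.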

While a simple case like this is easy enough to build into your system, something more complicated can be exponentially harder. There are two reasons for this.

The first is query complexity. What do you do if groups are hierarchical and documents are managed in a hierarchy of folders, any of which could have individual users or groups attached? This complexity is a major reason why RBAC (role-based access control) and ABAC (attribute-based access control) have historically been significantly more popular.

The second is an architectural design problem. Generally speaking, applications cannot (and probably should not, even if they can) operate 100% with a directed graph as their operational database; these databases are not optimized for traditional data-store workloads. This necessitates syncing between separate data stores. Every resource in the operational data store needs to be synced, as well as the edges defining the relations. While this isn’t a terribly hard thing to do, it can be a ton of data continuously flowing between systems.

Zanzibar, while still a directed graph, works in a different way. It completely solves the first issue of query complexity, but only partially solves the second issue.

Zanzibar describes a system of two components. The first is a representation of edges, where each edge is expressed as a tuple of elements (the second component, the namespace configuration, comes up shortly). This is their definition of the edge tuples (taken from the paper).

(tuple) ::= (object) '#' (relation) '@' (user)
(object) ::= (namespace) ':' (object id) 
(user) ::= (user id) | (userset) 
(userset) ::= (object) '#' (relation) 

To put this more simply, here are some plain English descriptions.

  • tuple: a pairing of an object, a relation, and a user.
  • object: a combination of a namespace and an object id.
  • user: either a user id or a userset.
  • userset: a collection representing the users that have a specific relation to an object.

Here is a representation of the previous cypher query in Zanzibar tuples.

group:editor#member@bob
doc:blog-post#writers@group:editor#member

Note that I changed the relation on the doc to writers. This is because of the second component of Zanzibar: the namespace definition and, in particular, what they call userset_rewrite. The namespace definition describes how relations connect to other relations and namespaces, and allows effective usersets to be computed from existing relations.

Here is the doc namespace configuration. I apologize if it’s not 100% correct; I can only do so much from a whitepaper.

name: "doc"

relation {
    name: "write"
    userset_rewrite {
        union {
            child { _this {} }
            child { tuple_to_userset {
                tupleset { relation: "writers" }
                computed_userset {
                    object: $TUPLE_USERSET_OBJECT # the object in the writers tuple (group:editor here)
                    relation: "member"
    } } }
} } }

What this means is that the write userset is computed from whatever the writers relation points at (in this case group:editor), expanded through its member relation.

This means you can check whether bob has the write permission on the document and get the correct result without any complicated querying. It doesn’t particularly matter how complicated your directed graph is; you can still query it easily. The trade-off is that the implementation side of the system is complex (to say the least), but in my opinion it’s generally worth the cost.

You may also notice that I didn’t actually define the objects themselves anywhere. That’s because they aren’t defined within Zanzibar; Zanzibar is only a representation of the graph. You still need to represent your resource data in another system. This means that while you still need two databases, you do not have to sync resource information between them. If your operational database doesn’t require the relations (as in, it’s not a normalized transactional SQL database) you likely don’t have to sync anything as long as IDs and namespaces don’t change. You may, however, wish to clean up tuples referencing the IDs of resources that have been deleted.

In late 2022, a handful of commercial and open-source products using the Zanzibar model came online. Oso and OpenFGA are a couple of them, and in this document I’ll be using OpenFGA since it is production-ready, open source, and significantly closer to the Zanzibar model. Oso alters the model and is much closer to AReBAC. That doesn’t mean it’s better or worse for any particular use case; it’s just different.

OpenFGA

OpenFGA is a relatively “pure” implementation of Zanzibar. Some of the syntax has changed, and the namespace configuration has been replaced with store-level configuration. Namespaces in the syntax have been replaced with types, and the dedicated user concept has been removed: any typed object can act as a “user”. Here are the equivalent tuples in OpenFGA.

[
  {
    "user": "user:bob",
    "relation": "member",
    "object": "group:editor"
  },
  {
    "user": "group:editor#member",
    "relation": "writers",
    "object": "doc:blog-post"
  }
]
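
As a usage sketch, here is how those tuples might be written through the OpenFGA JavaScript SDK. The connection settings are placeholders, and I’m assuming the @openfga/sdk OpenFgaClient API, which may differ slightly between SDK versions.

import { OpenFgaClient } from "@openfga/sdk";

// Placeholder configuration; point this at your own OpenFGA deployment.
const fga = new OpenFgaClient({
  apiUrl: "http://localhost:8080",
  storeId: "<store-id>",
});

// bob is a member of group:editor, and group:editor#member are writers of doc:blog-post.
await fga.write({
  writes: [
    { user: "user:bob", relation: "member", object: "group:editor" },
    { user: "group:editor#member", relation: "writers", object: "doc:blog-post" },
  ],
});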

The current format of the model schema is 1.1, and it is designed to be more human-readable. Here is the store configuration used for this example.

model
  schema 1.1
type user
type group
  relations
    define member: [user]
type doc
  relations
    define writers: [user,group#member]
    define write: writers

This defines three object types. The group type has a relation called member that accepts any object of the user type. The doc type has two relations (though they could be collapsed into one). The writers relation accepts either objects of the user type or usersets built from the member relation on the group type. The write relation simply references the writers relation on the same object.

This is a pretty simple model; you could easily do this outside of a Zanzibar-based ReBAC system. Here’s a more complicated one.

model
  schema 1.1
type document
  relations
    define can_edit: can_edit from parent
    define can_view: [user] or can_view from parent
    define parent: [folder]
type folder
  relations
    define can_edit: [user,group#member] or can_edit from parent
    define can_view: [user,group#member] or can_view from parent
    define parent: [folder]
type group
  relations
    define member: [user] or member from parent
    define parent: [group]
type user

This defines hierarchical groups and folders containing documents, with multiple permissions per document. You could then ask: does user:bob have can_edit on document:12345? It doesn’t matter where in the graph these objects sit; the relation query stays simple. It also doesn’t matter how the user ended up with access, though you can inspect the relation to find out if you want.
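
That question translates into a single check call no matter how deep the folder and group hierarchy goes. A sketch, again assuming the @openfga/sdk client with placeholder configuration:

import { OpenFgaClient } from "@openfga/sdk";

const fga = new OpenFgaClient({
  apiUrl: "http://localhost:8080",
  storeId: "<store-id>",
});

// Does user:bob have can_edit on document:12345? OpenFGA walks the
// folder and group hierarchy for us; the caller never sees that complexity.
const { allowed } = await fga.check({
  user: "user:bob",
  relation: "can_edit",
  object: "document:12345",
});
// allowed === true if bob can edit, directly or through any parent folder or group.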

ABAC

Attribute-based access control is a method where you evaluate permissions via policy documents in combination with a subject (the thing that wants access) and a resource (the thing the subject wants access to). You can then check a specific permission against the subject and the resource to determine whether or not the subject can perform that action.

Individual permissions are defined as the result of logical operations over attributes of the subject and the resource. Here is an example: let’s say we have a document and we want to protect a write permission on it. We can represent that in pseudo-code.

can_write := (sub.org_code = 'LPL' and sub.role.contains(resource.required_role)) or resource.has_public_read
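
In application code, that rule might look roughly like the following TypeScript; the attribute shapes mirror the pseudo-code above and are otherwise made up.

// Hypothetical attribute shapes for the subject and resource.
interface Subject {
  org_code: string;
  roles: string[];
}

interface Resource {
  required_role: string;
  has_public_read: boolean;
}

// Direct translation of the can_write rule above.
function canWrite(sub: Subject, resource: Resource): boolean {
  return (
    (sub.org_code === "LPL" && sub.roles.includes(resource.required_role)) ||
    resource.has_public_read
  );
}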

The most successful implementation of ABAC I am aware of is AWS IAM. It allows for extremely nuanced resource-to-resource controls alongside basic role-based controls.

ReABAC

Now we get back to ReABAC. This is a paradigm where some attributes in access policies are represented as OpenFGA checks, resolved before the policy documents are executed. This has a couple of benefits compared to “pure” Zanzibar.

The most obvious is that organizations have existing authorization models and tools. It’s not always easy to migrate systems wholesale to a new authorization model. A ReABAC model can help to migrate permissions slowly by re-implementing the existing model, with branches in conditional logic for the new implementation.

The other is that sometimes you just need a branch in authorization logic based on attributes. For example, a resource might normally be managed through a certain relation, but if it carries a certain attribute it requires a completely different permission. Applying this kind of branching conditional logic, a strong point of ABAC, to ReBAC means you can move that logic out of application code and into the model.

Proposed Implementation

My implementation uses a dual-model approach, where permissions are described through a combination of a policy document and an OpenFGA store for each of what I’m calling namespaces. These are effectively tenants: logically isolated from each other, but sharing the same databases and services.

OpenFGA would not be accessible directly, only through another API. This API would control the namespaces and fully manage the OpenFGA store linked to each namespace. Creating a namespace would create and link a corresponding store. The currently active store version (the authorization model) would not be updated every time the store configuration changes; it would be managed through a separate endpoint so changes can be made and tested prior to publishing.
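
Here is a rough sketch of what the namespace-to-store linkage behind that API could look like. createStore is a real @openfga/sdk call, but the Namespace record, the deployment details, and everything around them are hypothetical.

import { randomUUID } from "node:crypto";
import { OpenFgaClient } from "@openfga/sdk";

// Hypothetical record linking a tenant namespace to its own OpenFGA store.
interface Namespace {
  id: string;
  name: string;
  storeId: string;
}

async function createNamespace(name: string): Promise<Namespace> {
  // Placeholder deployment details.
  const fga = new OpenFgaClient({ apiUrl: "http://localhost:8080" });

  // Each namespace gets its own logically isolated store.
  const store = await fga.createStore({ name });

  // The store's authorization model is published later through the separate
  // publishing endpoint, so changes can be tested before going live.
  return { id: randomUUID(), name, storeId: store.id };
}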

Namespaces would also manage a policy document. This would be a YAML (or JSON) document containing a list of policies, as defined by the following types.

// A policy grants a single permission on a resource type when its rules pass.
interface Policy {
  permission: string;
  resourceType: string;
  rules: RuleKind[];
}

type RuleKind = RuleOp | RuleOr | RuleAnd | RuleMatch;

// Evaluates an attribute from the request context; `eq` is an optional
// value the attribute must equal.
interface RuleOp {
  when: string;
  eq?: string;
}

/*
 * The match string is in the following formats:
 * - "{relation}" - matches the relation on the target object.
 * - "{objectType}:{objectId}#{relation}" - matches the relation on a different object.
 */
interface RuleMatch {
  match: string;
}

// Logical OR over nested rules.
interface RuleOr {
  or: RuleKind[];
}

// Logical AND over nested rules.
interface RuleAnd {
  and: RuleKind[];
}
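
To pin down the semantics of these rule types, here is a minimal sketch of how a rule tree might be evaluated once the relation matches have been resolved through OpenFGA checks and the context attributes have been flattened into a lookup. The helper names, the string-typed attributes, and the truthiness handling for a `when` without an `eq` are my assumptions, not a fixed part of the design.

// Relation checks already resolved through OpenFGA, keyed by the match string,
// e.g. { "edit": true, "global_roles:admin#member": false }.
type MatchResults = Record<string, boolean>;

// Attributes flattened out of the request context, e.g. { "is_unmanaged": "true" }.
type Attributes = Record<string, string | undefined>;

function evaluate(rule: RuleKind, matches: MatchResults, attrs: Attributes): boolean {
  if ("or" in rule) return rule.or.some((r) => evaluate(r, matches, attrs));
  if ("and" in rule) return rule.and.every((r) => evaluate(r, matches, attrs));
  if ("match" in rule) return matches[rule.match] === true;
  // RuleOp: compare the attribute to `eq`, or treat it as a boolean flag when `eq` is omitted.
  const value = attrs[rule.when];
  return rule.eq !== undefined ? value === rule.eq : value === "true";
}

// Assumes the top-level rule list is OR'd together, which matches how the
// example policies below read (any passing rule grants the permission).
function policyAllows(policy: Policy, matches: MatchResults, attrs: Attributes): boolean {
  return policy.rules.some((r) => evaluate(r, matches, attrs));
}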

Policies can also use a wildcard for the permission or resource type, in which case they apply to all permissions or all resource types. This is useful for application-admin permissions. Here is an older example of a policy document.

permission: *
resourceType: *
rules:
  - match: global_roles:admin#member
---
permission: edit-credentials
resourceType: organization
rules:
  - and:
      - match: edit
      - when: is_unmanaged
---
permission: edit
resourceType: organization
rules:
  - match: engineer
  - match: owner
---
permission: edit-pipelines
resourceType: organization
rules:
  - and:
      - when: private-metadata.pipeline-management
        eq: "true"
      - match: engineer
---
permission: approve-credentials
resourceType: organization
rules:
  - match: engineer
---
permission: edit
resourceType: app
rules:
  - match: edit

I’ve only got one permission here based on a parent-child relation, but that’s where slow-rolling things can help out a ton. The current method may only work on organizations, but with ReBAC we can model it more correctly over time. There is, however, no rule saying we cannot build our policy document off of organization and app metadata when we execute the policy.

One use-case here that I had a hard time modeling in pure ReBAC is that certain credentials (I call them managed credentials) cannot be managed by engineers on the team even if they belong to the org containing the credential. Those credentials can only be managed by admins of the system.

When you want to check the permissions a subject may have on a resource, you call an endpoint with this CheckRequest object and get back a string array of permissions as the result.

interface CheckRequest {
  namespaceId: string;
  subject: ResourceId;
  resource: ResourceId;
  context: unknown;
}

interface ResourceId {
  type: string;
  id: string;
}

The context field will contain any attributes needed for policy execution. These can be resource or subject attributes but they do not have to be.
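
Here is a sketch of a call to that endpoint. The URL, transport, and exact response shape are assumptions based on the description above; the CheckRequest and ResourceId types are the ones defined earlier.

// Hypothetical endpoint path; the real routing and auth are up to the implementation.
async function checkPermissions(req: CheckRequest): Promise<string[]> {
  const res = await fetch("https://authz.example.com/v1/check", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  // The endpoint returns a plain string array of granted permissions.
  return (await res.json()) as string[];
}

// Example: which permissions does bob have on organization 42?
const permissions = await checkPermissions({
  namespaceId: "default",
  subject: { type: "user", id: "bob" },
  resource: { type: "organization", id: "42" },
  context: { is_unmanaged: "true" },
});
// e.g. ["edit", "edit-credentials", "approve-credentials"]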


My ultimate goal with this system is to resolve the issues of traditional role-based access control by combining two next-generation techniques into one, while giving existing systems a migration path that reduces the complexity and risk of a change like this.

