
Invite links gql #2745

Open

GregorShear wants to merge 2 commits into master from greg/invite_links

Conversation

@GregorShear (Contributor) commented Mar 6, 2026

  • Add internal.invite_links table to replace the directives-based invite flow with a simplified model
  • Implement GraphQL query and mutations (inviteLinks, createInviteLink, redeemInviteLink, deleteInviteLink)
  • Single-use links are deleted upon redemption; multi-use links persist
  • Include transitional dual-write triggers that sync between public.directives and internal.invite_links, plus a backfill of existing unredeemed grant directives — to be removed after the UI adopts the new GraphQL API
mutation Create {
  createInviteLink(
    catalogPrefix: "acmeCo/"
    capability: admin
    singleUse: true
  ) {
    token
    catalogPrefix
    capability
    singleUse
    detail
    createdAt
  }
}

mutation Redeem {
  redeemInviteLink(token: "a1b2c3d4-e5f6-7890-abcd-ef1234567890") {
    catalogPrefix
    capability
  }
}

mutation Delete {
  deleteInviteLink(token: "a1b2c3d4-e5f6-7890-abcd-ef1234567890")
}

query {
  inviteLinks(
    catalogPrefix: "acmeCo/"
    filter: { singleUse: { eq: false } }
  ) {
    edges {
      cursor
      node {
        token
        capability
        singleUse
      }
    }
  }
}

Test plan

  • Snapshot tests for create, redeem, unauthorized create, bad token, and exhausted single-use redemption
  • Manual test of transitional triggers: create a directive via PostgREST and confirm it appears in internal.invite_links
  • Verify backfill picks up existing unredeemed grant directives

@GregorShear GregorShear force-pushed the greg/invite_links branch 3 times, most recently from cb796df to ac9b544 Compare March 9, 2026 17:16
/// Reusable filter input types for GraphQL queries.

#[derive(Debug, Clone, Default, async_graphql::InputObject)]
pub struct BoolFilter {
@GregorShear (Contributor, Author):

Wanting to start a parallel discussion here about standardizing optional filters that could eventually be exposed in the UI. Bool is the simplest example, here are a couple others that would follow the pattern:

#[derive(Debug, Clone, Default, async_graphql::InputObject)]
pub struct StringFilter {
    pub eq: Option<String>,
    pub ne: Option<String>,
    pub starts_with: Option<String>,
    pub contains: Option<String>,
}

#[derive(Debug, Clone, Default, async_graphql::InputObject)]
pub struct IntFilter {
    pub eq: Option<i64>,
    pub ne: Option<i64>,
    pub gt: Option<i64>,
    pub gte: Option<i64>,
    pub lt: Option<i64>,
    pub lte: Option<i64>,
}

@jshearer (Contributor) commented Mar 10, 2026:

Like I was talking about below, unless we're applying these filters over a homogeneous-enough surface area (i.e., it's not actually clear to me that the UI should be the same datagrid/table sort of thing for captures, collections, materializations, storage mappings, data planes, tenant members, etc.), then I'd actually vote for keeping it specific to each resolver. Thoughts?

Or maybe keep these StringFilter/IntFilter reusable chunks, but only implement them as needed in the resolvers where they make sense?

@GregorShear (Contributor, Author):

I think this is the right compromise - having a standardized pattern can only help, and how far we take the implementation depends on what's needed vs. how painful it gets as it grows.

@psFried (Member):

I can definitely see the benefit of having consistent terminology for different predicates. Though a problem I foresee with the approach is that not all resolvers will be able to support all predicates. For example, for some we might only support eq, and others maybe only startsWith. I mostly just wouldn't want to have our graphql schema suggest that you could use a contains filter on liveSpecs names, for example, when we don't want to support such queries in the database.

I'm a little on the fence about liveSpecs(by: { prefix: "acmeCo/" }) vs liveSpecs(by: { catalogName: { startsWith: "acmeCo/" } }). I do kinda like the latter, though IDK whether I'd like it better enough to justify migrating existing resolvers or living with two different conventions. Maybe it'd help to understand what the migration process would look like? If we wanted to have some limited backward compatibility during a transition period, would that be pretty easy?

@GregorShear (Contributor, Author):

Migration would be easy - we can deprecate the current param and add it to the filter section, then decide later when we think it's safe to remove the deprecated one.
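A rough sketch of what that transition could look like (names are illustrative, not the actual resolver signatures in this PR):

```rust
// Hypothetical transition sketch: a resolver keeps the legacy `prefix`
// argument alongside the new filter input, preferring the filter when
// both are supplied. All names here are illustrative.
#[derive(Debug, Clone, Default)]
pub struct PrefixFilter {
    pub eq: Option<String>,
    pub starts_with: Option<String>,
}

/// Resolve the effective prefix predicate from either the deprecated
/// argument or the new filter input.
pub fn effective_starts_with(
    legacy_prefix: Option<String>,
    filter: Option<PrefixFilter>,
) -> Option<String> {
    filter.and_then(|f| f.starts_with).or(legacy_prefix)
}
```

Marking the legacy argument deprecated in the schema would then let clients migrate at their own pace before it's removed.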

/// The capability level granted by this invite link.
pub capability: models::Capability,
/// Whether this invite link can only be used once.
pub single_use: bool,
@GregorShear (Contributor, Author):

This replaces the directives.uses_remaining field (and we'll delete consumed invites instead of setting uses_remaining = 0).

Contributor:

test comment please ignore

/// The secret token for this invite link.
pub token: uuid::Uuid,
/// The catalog prefix this invite link grants access to.
pub catalog_prefix: models::Prefix,
@GregorShear (Contributor, Author):

Related to the optional filters, I'd like these resolvers to do their own check of which prefixes the user can see, and then this catalog_prefix can be an optional filter to narrow the results further.

Would love to talk this through w/ Phil & Joseph.

@GregorShear (Contributor, Author) commented Mar 9, 2026:

The other thing I like about this is that we can define the operators and standardize the prefix filters that Phil and I were chatting about a few weeks ago (prefix vs prefixedBy vs exactPrefixes, etc.)

#[derive(Debug, Clone, Default, async_graphql::InputObject)]
pub struct PrefixFilter {
    pub eq: Option<models::Prefix>,
    pub starts_with: Option<models::Prefix>,
}

So an example GraphQL query might look like:

query links {
  inviteLinks(filter: {
    singleUse: { eq: true }
    catalogPrefix: { startsWith: "gregCo" }
  }) {
    ...
  }
}

Contributor:

Yeah... I both do and don't like this. Back when I wrote a very similar generalized GraphQL resolver filter for Narwhal's GraphQL API, the reason it was tractable at all was that the entities we were querying over were homogeneous enough that there was one centralized place (the "query builder") where we could turn these filters (startsWith, lt, eq, and, or, etc.) into a big WHERE clause which could then be applied to any query we wanted.

If we can't do that, then it becomes a combinatorial nightmare of having to support every type of filter on every graphql resolver's individual query/queries.

Contributor:

I will also note, I spent a lot of time in there optimizing the monstrous queries it would generate. So we should probably not follow that same exact pattern here :P

@GregorShear GregorShear force-pushed the greg/invite_links branch 2 times, most recently from 24d51c2 to e1271ed Compare March 9, 2026 20:32
created_at AS "created_at!: chrono::DateTime<chrono::Utc>"
FROM internal.invite_links
WHERE catalog_prefix::text ^@ $1
AND ($4::bool IS NULL OR single_use = $4)
@GregorShear (Contributor, Author):

Optional filters will make these queries fairly long...
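One way to keep them shorter is to emit clauses only for predicates that are actually set, instead of binding every parameter with the `($n IS NULL OR col = $n)` pattern above. A dependency-free sketch (hypothetical helper, not code from this PR):

```rust
// Hypothetical sketch: expand a filter into WHERE fragments only for the
// predicates that were supplied, with sequential parameter numbering.
#[derive(Debug, Clone, Default)]
pub struct BoolFilter {
    pub eq: Option<bool>,
    pub ne: Option<bool>,
}

/// Expand `filter` on `column` into SQL clause strings and their bind
/// values, numbering parameters from `*next_param + 1` onward.
pub fn bool_filter_sql(
    column: &str,
    filter: &BoolFilter,
    next_param: &mut usize,
) -> (Vec<String>, Vec<bool>) {
    let mut clauses = Vec::new();
    let mut binds = Vec::new();
    if let Some(v) = filter.eq {
        *next_param += 1;
        clauses.push(format!("{column} = ${next_param}"));
        binds.push(v);
    }
    if let Some(v) = filter.ne {
        *next_param += 1;
        clauses.push(format!("{column} <> ${next_param}"));
        binds.push(v);
    }
    (clauses, binds)
}
```

The trade-off: dynamic SQL like this can't go through sqlx's `query!` macro (which needs a static string), so it gives up compile-time query checking, which may be exactly why the `IS NULL` pattern was chosen here.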


// Delete single-use invite links upon redemption.
if invite.single_use {
sqlx::query!("DELETE FROM internal.invite_links WHERE token = $1", token,)
@GregorShear (Contributor, Author):

Just calling out new behavior: delete single-use invites when they're redeemed.

tracing::info!(
%catalog_prefix,
?capability,
%claims.sub,
@GregorShear (Contributor, Author):

@jshearer I suppose this could be the record of who invited the user...

Contributor:

We don't keep logs indefinitely. If it's not required for compliance then I'm 100% whatevs about tracking who invited whom, I was just spitballing features that the existing directives-based implementation could offer that we might care about.

@GregorShear GregorShear force-pushed the greg/invite_links branch 2 times, most recently from 242ab53 to 8904603 Compare March 9, 2026 20:45
@GregorShear GregorShear marked this pull request as ready for review March 9, 2026 20:46
@GregorShear GregorShear requested review from jshearer and psFried and removed request for jshearer March 9, 2026 20:46
@GregorShear GregorShear force-pushed the greg/invite_links branch 4 times, most recently from 5bcf582 to 56a525c Compare March 10, 2026 13:56
@psFried (Member) left a comment:

I found a few things that are probably worth changing. I'm still interested in resolving the conversation on BoolFilter, but just wanted to get you the feedback that I have now.

cursor: String!
}

input InviteLinksFilter {
@psFried (Member):

I actually like this _Filter naming better, though the inconsistency with _By seems unfortunate. IMO if we're going to switch the convention, then it's probably worth trying to switch everything else over, too. I'm kinda on the fence as to whether it's worthwhile, though. What do you think?


Also including transitional dual-write triggers and backfill migration so we can move off of the directive-based invite links. These will be removed in a future migration.
