I am also ‘Andrew’, the admin of this server. I’ll try to remember to only use this account for posting stuff.

  • 0 Posts
  • 9 Comments
Joined 8 months ago
Cake day: February 17th, 2025


  • It’s straightforward enough to do in back-end code, by just rejecting a query if parameters are missing, but I don’t think there’s a way to define a schema that then gets used to auto-generate the documentation and validate the requests. If a request fails that kind of validation, the back-end never sees it.

    For something like https://freamon.github.io/piefed-api/#/Misc/get_api_alpha_search, the docs show that ‘q’ and ‘type_’ are required, and everything else is optional. The schema definition looks like:

    /api/alpha/search:
        get:
          parameters:
            - in: query
              name: q
              schema:
                type: string
              required: true
            - in: query
              name: type_
              schema:
                type: string
                enum:
                  - Communities
                  - Posts
                  - Users
                  - Url
              required: true
            - in: query
              name: limit
              schema:
                type: integer
              required: false
    

    required is a simple boolean for each individual field - you can say every field is required, or that none of them are, but I haven’t come across a way to say that at least one of a set of fields is required. That check has to happen in back-end code instead (a rough sketch of what I mean is below).
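
    For what it’s worth, the check itself is trivial on the back-end. Here’s a minimal sketch of the idea, assuming a Flask-style view; the helper name and parameter names are illustrative, not PieFed’s actual code:

    from flask import request, abort

    def require_at_least_one(*names):
        # The schema can mark each parameter as individually required or optional,
        # but "at least one of these" has to be checked by hand, like this.
        if not any(request.args.get(name) for name in names):
            abort(400, description='provide at least one of: ' + ', '.join(names))

    # Example use at the top of a view function:
    # require_at_least_one('post_id', 'user_id', 'community_id')

    The downside is the one above: nothing in the auto-generated documentation reflects that rule unless you spell it out in a description field.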


  • PieFed has a similar API endpoint. It used to be scoped, but that was changed at the request of app developers. It’s how people browse sites by ‘New Comments’, and, for a GET request, it’s not really possible to document and validate that an endpoint needs at least one of something (i.e. that none of ‘post_id’ or ‘user_id’ or ‘community_id’ is individually required, but there needs to be one of them).

    It’s unlikely that these crawlers will discover PieFed’s API, but I guess it’s no surprise that they’ve moved on from basic HTML crawling to probing APIs. In the meantime, I’ve added some basic protection to the back-end for anonymous, unscoped requests to PieFed’s endpoint (conceptually along the lines of the sketch below).
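
    To be concrete about the sort of check I mean, here is a conceptual sketch, not PieFed’s actual code; the view name is assumed, and the parameter names are just the ones mentioned above:

    from flask import request, jsonify

    SCOPING_PARAMS = ('post_id', 'user_id', 'community_id')

    def comment_list():
        # A request that is both anonymous and unscoped is the pattern an API
        # crawler would produce, so turn it away cheaply before doing any work.
        is_anonymous = request.headers.get('Authorization') is None
        is_unscoped = not any(request.args.get(p) for p in SCOPING_PARAMS)
        if is_anonymous and is_unscoped:
            msg = 'provide an auth token or one of: ' + ', '.join(SCOPING_PARAMS)
            return jsonify(error=msg), 400
        ...  # normal handling continues here

    In a sketch like this, authenticated clients browsing by ‘New Comments’ still get the unscoped listing, which was the point of relaxing the endpoint for app developers in the first place.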






  • I’d be wary of getting a conversation node from anybody other than the original author (as described in the second approach).

    There’s a reason why, if you want to resolve a missing post in Lemmy, etc., you have to use the fedi-link to retrieve it from its source, not just from any other instance that has a copy (because, like the “context owner”, they could be lying).

    For Group-based apps, conversation backfill is mostly an issue for new instances, which might have a community’s posts (from its outbox) but will be missing old comments. Comments can be automatically and recursively retrieved when they are replied to or upvoted by a remote actor, but fetching from the source (as you arguably should do) is complicated by instances closing (there are still loads of comments from feddit.de and kbin.social out there; it will be much worse when lemm.ee disappears). So perhaps Lemmy could also benefit from post authors being considered the trusted owner of any comments they receive.


  • PieFed is just a Fediverse platform that aims to interoperate with Lemmy in much the same way that it aims to interoperate with any other Group-based platform (Mbin, PeerTube, NodeBB, WordPress).

    Lemmy’s “quirks” are the reason why your account won’t see polls from Mbin, or channels from PeerTube, or posts from NodeBB, or backfilled content from WordPress.

    It’s not my intent to criticise Lemmy, but these are verifiable problems, whereas it doesn’t seem fair to criticise PieFed for problems that you can’t clearly remember.