Running the function getAllByIDs only ever returns a maximum of 100 documents, no matter what.
What steps have you taken to resolve this issue already?
I have tried using dangerouslyGetAll with a custom filter, and I have tried writing my own queryAll function that sends requests recursively. I have also tried calling getAllByIDs recursively (this works).
Errors
No errors, but the response's total_results_size is 100, even if I send in 300 different IDs.
The only workaround I have is splitting the IDs into chunks of 100 and running multiple queries that way.
Versions
"@prismicio/client": "^7.6.0",
I can provide additional info on the requests sent in a private DM if needed.
Thank you for reaching out! This is what the documentation says about the get-all queries:
Get-all methods — like getAllByIDs, getAllByType, getAllByTag, getAllByEveryTag, and dangerouslyGetAll — perform a recursive query to fetch all matching documents from the API. These results are not paginated. They return only an array of document objects.
Note that the get-all methods are throttled. After the first request is sent, they wait at least 500ms before sending each subsequent request. This is meant to prevent issues on the API in case of large, complex queries. This will have no effect on queries returning fewer than 100 documents.
If you found a recursive method that works, you could go with that.
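For context, the recursive pattern those get-all methods describe can be sketched roughly like this on top of the paginated getByIDs method (this is an illustration of the mechanism, not the library's actual internals; the repository name is a placeholder):

```typescript
import * as prismic from "@prismicio/client";

const client = prismic.createClient("your-repo-name"); // placeholder repository

// Rough sketch of a recursive "get all" built on the paginated getByIDs.
async function fetchAllByIDs(ids: string[]) {
  const documents: prismic.PrismicDocument[] = [];
  let page = 1;
  let totalPages = 1;

  while (page <= totalPages) {
    const response = await client.getByIDs(ids, { page, pageSize: 100 });
    documents.push(...response.results);
    // Pagination metadata in the response decides whether to keep going.
    totalPages = response.total_pages;
    page += 1;
    // Throttle subsequent requests, as the docs describe (at least 500ms).
    if (page <= totalPages) {
      await new Promise((resolve) => setTimeout(resolve, 500));
    }
  }

  return documents;
}
```

If the first response under-reports total_pages, a loop like this stops after one request.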
I could be misunderstanding, but as I understand it, the purpose of the get-all methods is to get ALL (relevant) documents. Now that I'm aware of the problem, it isn't really a problem for me to write the logic myself.
But it did cause issues for me before I noticed it, so my thought is it could affect other people as well.
The context where I encountered the issue is a site-wide migration, where we received a Prismic webhook saying updates had been made to these 300 IDs. Then we simply run the get-all method to fetch all the documents: client.getAllByIDs(ids, { lang: '*' })
To which I get this result:
The expected behavior here would be three network requests of 100 results each, but since the first response reports total_results_size: 100, no further requests are made, even though there should be more documents to fetch.
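For illustration, the first (and only) response has roughly this shape; total_results_size: 100 is the value I actually receive, while the other fields are reconstructed here from the standard API response format:

```json
{
  "page": 1,
  "results_per_page": 100,
  "results_size": 100,
  "total_results_size": 100,
  "total_pages": 1,
  "next_page": null,
  "results": ["... 100 documents ..."]
}
```

With 300 IDs I would expect total_results_size: 300, total_pages: 3, and next_page linking to the second page.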
I couldn't find how to send a DM here, but if it would be of use, I could send the entire request.
Well, this is the response of a typical dangerouslyGetAll(). That function works because it can run recursively: it detects that there are 909 documents in total, it links to the next request to fetch, etc.
Compare that to the previous response I sent, in which it won't run recursively, since it doesn't realise there are more documents to get.
It's a bit beside the point, but our website is built using Astro. Most webhooks will only contain a single document, therefore we only update the ones that actually need updating. But besides this, we use getAllByIDs for numerous different functions.
After discussing this with our team, we can confirm that the getAllByIDs method is limited to returning a maximum of 100 documents per request. This is an API-side restriction, so even if you pass more than 100 IDs, the response will be capped.
The best workaround is to batch your IDs into groups of 100 and make multiple sequential requests; a sketch of this is included below. However, before recommending this approach as the best solution, we’d love to better understand your exact use case.
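A minimal sketch of that batching approach could look like this (the helper name, placeholder repository, and chunk size of 100 are illustrative; adapt error handling to your setup):

```typescript
import * as prismic from "@prismicio/client";

const client = prismic.createClient("your-repo-name"); // placeholder repository

// Batch the IDs into groups of 100 and query each group sequentially,
// concatenating the results.
async function getAllByIDsBatched(
  ids: string[],
  params?: Parameters<prismic.Client["getAllByIDs"]>[1],
) {
  const documents: prismic.PrismicDocument[] = [];
  for (let i = 0; i < ids.length; i += 100) {
    const chunk = ids.slice(i, i + 100);
    documents.push(...(await client.getAllByIDs(chunk, params)));
  }
  return documents;
}
```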
From what you’ve described, it sounds like this might be part of a one-off migration, or an edge case where you’re fetching specific documents from a webhook. In most cases, triggering a full rebuild would be the more typical approach for such a large number of documents. Could you share more about why you’re opting to fetch specific documents by ID instead of rebuilding, so that all getByUID and getByType queries can run again? That might help us suggest a more efficient solution tailored to your setup. (I ask this because your approach isn't one we've seen often.)
To briefly answer your last question: we have created an admin interface to copy document types over to other locales, make small updates to multiple documents at once, etc. The way we do this is by fetching the documents to update by their IDs and then making use of the Migration API. When we then receive a webhook about documents being updated, we take the IDs of those documents and run a new getAllByIDs. Here we can do validation, display a nice view of which documents were most recently updated, and so on.
But besides our use case, there are multiple reasons why one might use the getAllByIDs method, and while it's rare that there would be more than 100 documents, I would rather have it "just work" than silently disregard the rest of the documents.
I appreciate your willingness to help us out, but since we have already created a solution that works for us, I mostly created this ticket in case there is something wrong with the method that could affect other users.
It seems you deem the behaviour I'm describing to be intended. I don't know if that is because we are misunderstanding each other, but I would otherwise disagree.
I'm aware each request is capped at 100, but that doesn't mean the method is capped at returning only 100. The dangerouslyGetAll call whose response data I sent above recognises 909 results, and it also returns 909 documents. Each request is capped at 100, but it recursively sends more requests.
Meaning getAllByIDs is also meant to recursively fetch more documents; this is also the behavior described in the docs.
The recursion probably works as intended, but for some reason, when the query includes the filter for IDs, the response to the first fetch doesn't recognise that there are more documents.
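A minimal repro of what I'm seeing (the repository name is a placeholder; the actual IDs are elided):

```typescript
import * as prismic from "@prismicio/client";

const client = prismic.createClient("your-repo-name"); // placeholder repository

// The 300 document IDs from the webhook (elided here).
const ids: string[] = [/* ...300 document IDs... */];

const response = await client.getByIDs(ids, { pageSize: 100 });

console.log(response.total_results_size); // 100 (expected 300)
console.log(response.total_pages); // 1 (expected 3)
```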
Thanks for following up and for sharing the details of your use case! After further investigation on our end, we can confirm that the behavior you’re experiencing is indeed a bug in the API, specifically related to how total_results_size and total_pages are calculated when filtering by IDs. While we do have an internal technical limit on the number of document IDs that can be fetched, it should not be blocking at exactly 100 like this.
That said, given the niche nature of this issue and our current priorities, we don’t have plans to update the API to address this in the near term. However, we do recognize that this results in an unexpected developer experience, so we are considering the following improvements:
1. Updating the documentation to explicitly mention this limitation and provide a recommended workaround.
2. Potentially adding a warning in the getAllByIDs() method itself to flag this behavior and guide users to a solution (a user-land version of such a guard is sketched below).
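In the meantime, a user-land guard along these lines could surface the issue early (the wrapper name and the hard-coded threshold are purely illustrative):

```typescript
import * as prismic from "@prismicio/client";

// Illustrative guard: warn when more than 100 IDs are passed, since the
// API bug can silently cap the result set at 100 documents.
async function getAllByIDsGuarded(
  client: prismic.Client,
  ids: string[],
  params?: Parameters<prismic.Client["getAllByIDs"]>[1],
) {
  if (ids.length > 100) {
    console.warn(
      `getAllByIDs called with ${ids.length} IDs; results may be silently capped at 100. Consider batching.`,
    );
  }
  return client.getAllByIDs(ids, params);
}
```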
For now, your approach of batching IDs in groups of 100 is indeed the best workaround, and we appreciate you bringing this to our attention. Your ticket has been valuable in highlighting this issue, and we’ll be taking steps to ensure it’s better communicated to other developers in the future.
Let us know if there’s anything else we can do to assist!