I received an email stating that some of our API query responses will exceed the upcoming limit. I replied to the email (as directed) with some queries but have not heard back yet.
Now some of the pages that I suspect would be above the limit are already returning errors several weeks before the limit deadline.
I’ll add my queries below, but in the meantime can you please confirm the limit is not already being enforced? As it stands, several of our client’s most important pages are not rendering.
Here are the questions I asked in my reply to the email (I still need answers to these, please):
Hi, thanks for letting me know about this change.
I just started to look into paginating the query but realised it’s not possible, so I need to ask for help on the best way to proceed.
The query mentioned in the CSV file is triggered from a page with a collection of products. However, rather than querying multiple product documents that could be paginated, it’s a single collection page.
Products are added to the collection as slices (along with other blocks like images and videos) so they are all returned in one result. We needed to implement it in this way so the client can control the order of product/image/video blocks on the page.
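For context, the query is shaped roughly like the sketch below (the document type, slice-zone field, and slice type names here are illustrative placeholders, not our exact schema):

```graphql
query CollectionPage {
  # "collection_page" and "body" are placeholder names for this sketch
  collection_page(uid: "example-collection", lang: "en-us") {
    title
    # the slice zone: every product/image/video block on the page
    # comes back in this single list, in the editor-defined order
    body {
      ... on Collection_pageBodyProduct_block {
        primary { product_title }
      }
      ... on Collection_pageBodyImage_block {
        primary { image }
      }
    }
  }
}
```

There is no argument on the slice-zone field to limit how many blocks are returned, which is why one response carries the whole page.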
It looks like some of the collections have hundreds of blocks, which must be what’s causing the response to be so large.
Can you suggest a way to handle this situation so the pages work once you reduce the limit?
If there’s no way around it apart from reducing the number of blocks on a page, could you let me know the URLs of the offending pages, please? I wasn’t able to find the ‘handle’ variable in the query CSV file you shared.
I believe a solution to our specific issue would be to allow GraphQL slicing on Prismic slice zones. That way we could limit the number of slices returned in a response and request the rest through pagination. Is this a possibility?
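Concretely, the kind of slicing I have in mind would look something like this (the `first`/`after` arguments on the slice-zone field are hypothetical — they don’t exist in the current API, which is exactly what I’m asking about; type names are placeholders as before):

```graphql
query CollectionPage($cursor: String) {
  collection_page(uid: "example-collection", lang: "en-us") {
    title
    # hypothetical pagination arguments on the slice zone,
    # mirroring the cursor-style pagination already used for
    # multi-document queries
    body(first: 20, after: $cursor) {
      ... on Collection_pageBodyProduct_block {
        primary { product_title }
      }
    }
  }
}
```

Even a simple `first`/`skip` style argument pair would work for us; anything that lets us fetch the blocks in batches while preserving the editor-defined order would keep these pages under the response-size limit.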