October 1st API response limit already in place

I received an email stating that some of our API query responses will break the upcoming limit. I replied to the email (as directed) with some questions but have not heard back yet.

Now some of the pages that I suspect would be above the limit are already returning errors several weeks before the limit deadline.

I’ll add my questions below, but in the meantime can you please make sure the limit isn’t already being enforced? As it stands, several of our client’s most important pages are not rendering.

The questions I asked in reply to the email (I still need answers, please):

Hi, thanks for letting me know about this change.

I just started to look into paginating the query but realised it’s not possible, so I need to ask for help on the best way to proceed.

The query mentioned in the CSV file is triggered from a page with a collection of products. However, rather than querying multiple product documents that could be paginated, it queries just a single collection page.

Products are added to the collection as slices (along with other blocks like images and videos), so they are all returned in one result. We needed to implement it this way so that the client can control the order of product/image/video blocks on the page.

It looks like some of the collections have hundreds of blocks, which must be what’s causing the response to be so large.
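To give you an idea of the shape of the query, here’s a simplified sketch (the custom type, slice, and field names are placeholders rather than the real ones):

```graphql
query CollectionPage($handle: String!) {
  # Single collection page document looked up by its UID (the 'handle')
  collection_page(uid: $handle) {
    # The slice zone: every block on the page comes back in this one array
    body {
      ... on Collection_pageBodyProduct {
        primary {
          product_title
          product_link
        }
      }
      ... on Collection_pageBodyImage {
        primary {
          image
        }
      }
      ... on Collection_pageBodyVideo {
        primary {
          video_embed
        }
      }
    }
  }
}
```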

Can you suggest a way to handle this situation so the pages work once you reduce the limit?

If there’s no way around it apart from reducing the number of blocks on a page, could you let me know the URLs of the offending pages, please? I wasn’t able to find the ‘handle’ variable in the query CSV file you shared.

Thanks,

Pete

I believe that a solution to our specific issue would be to allow GraphQL slicing on Prismic slice zones. That way we could limit the number of slices returned in a response and request more through pagination. Is this a possibility?
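To illustrate what I mean, something along these lines (hypothetical syntax — as far as I know these arguments don’t currently exist on slice zone fields, and the type/slice names are simplified):

```graphql
query CollectionPage($handle: String!, $cursor: String) {
  collection_page(uid: $handle) {
    # Hypothetical 'first'/'after' arguments on the slice zone,
    # so only a page of slices is returned per request
    body(first: 20, after: $cursor) {
      ... on Collection_pageBodyProduct {
        primary {
          product_title
        }
      }
    }
  }
}
```

We could then fetch the remaining slices in follow-up requests using the returned cursor.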

Hi Pete,

We have previously discussed this here; it should help clarify things further:

It sounds like what you’re experiencing is the URL length limit, like this user, rather than the payload limit, which has not been activated yet.

Thanks.

Hi Phil, this particular GraphQL query still works fine for almost all pages (and worked on this page until recently). The query is the same across all pages; the only difference is the handle variable passed in.

Surely the request URL would be the same length for all pages (give or take a few characters for differences in handle length)?

If so, then it must be the response that’s the problem, rather than the request, right?

Thanks,
Pete

I can't say for certain exactly what the issue is, but it's not the payload size limit.

If you send me the query, either here or in a private message, I can investigate further for you 🙂

Update:

This is indeed the 6 MB payload limit. I think the reason the repo wasn’t moved to the October 1st deployment date is that it wasn’t identified at the time as one with large responses.

We apologise for this failure in communication and for the issues it has caused.

Next week, we’re going to deploy the proper error message for when this happens. Something like:

Response payload size exceeded maximum allowed payload

Sorry again about this.