CDN Timeouts at

Hey, so we previously had a bespoke timeout that fired after 1.5 seconds, but we switched to using the timeout option provided by the prismic-javascript package.
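For context, the bespoke timeout we replaced was roughly the usual `Promise.race` wrapper (this is an illustrative sketch, not our production code, and all names here are made up):

```javascript
// Sketch of a bespoke request timeout: race the real request
// against a timer that rejects after `ms` milliseconds.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Request timed out after ${ms}ms`)),
      ms
    );
  });
  // Whichever settles first wins; clear the timer either way
  // so the process isn't kept alive by a pending timeout.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

We now pass the timeout to the client instead (if I recall correctly the option is `timeoutInMs`, e.g. `Prismic.client(endpoint, { timeoutInMs: 1500 })`, but double-check the option name against the package docs for your version).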

We are still suffering timeouts because the response takes longer than 1.5 seconds. Is this normal behaviour?

@Fares Just to let you know, between 3 and 4pm yesterday we had over 100 requests take over 1.5s to resolve. This led to a timeout (that we configured) and content not being shown on the page, affecting our SEO performance.

Hi David,

I’ve searched our logs and I found some timeouts, so I will create a ticket on our issue tracker and reach back to you as soon as possible.


Hey Fares,

Just a quick update to say we had another spate of timeouts yesterday between 14:00 and 17:00. Like the above, these are incidents where we have terminated the connection with Prismic because the request exceeded 1.5s.


Hi Tom,

I’ve updated the issue and we will reach out once we have more info about this.


I have started experiencing timeouts again this afternoon. Nothing has changed on our end that would cause timeouts. Please find the timestamps in the screenshot. Could you please help with the issue?


Hi @Fares,

We’re again seeing a large number of timeouts, starting ~13:45 and continuing up to now.

From ~15:15 the number of requests which were timing out significantly increased.

No changes on our side - would you be able to let us know when the issue might be resolved?




Thank you for letting us know, I will update the ticket with the new info.

Hey @Fares, it’s been quite some time, and numerous timeouts. You have updated the ticket, which we appreciate, but I don’t think it should be taking this long to get some resolution to our degraded performance. You mentioned 2 weeks ago that there were timeouts on your end. Is this something we should just expect and handle, or is there something (other than utilising the timeout) we can do to mitigate brownouts from the Prismic API?

I would appreciate more of a response than just updating the ticket each time.

Hi David,

Thank you for your patience. I have already brought this issue up multiple times in our internal meetings, and we are still investigating it.

In fact, I’m currently in contact with our DevOps team and they are investigating this issue.

I will get back to you as soon as I get more info,


Thanks @Fares, really appreciate it.

Hey David,

So the dev team is still investigating the issue.
We have noticed that only 0.3% of requests go over 0.8s. That still doesn’t meet our standards, and we need to continue investigating the issue.
To that end, we have already moved you to another cluster.

Hey David,

Do you still have this issue?


This issue has been closed due to inactivity.

Re-opened on request from @thomas.cawthorn

Thanks for reopening the ticket @Phil.

We're seeing a slow-down in response times this afternoon:
Batch 1:

  • First error 2020-10-06@14:04:59.912 GMT.
  • Last error 2020-10-06@14:07:31.179 GMT.

Batch 2:

  • First error 2020-10-06@14:34:55.524 GMT
  • Last error 2020-10-06@14:34:34.963 GMT

Our error threshold is set to 1500 milliseconds.



Hi Tom,

Thanks for reaching back to us. The issue I created for this subject is still open in our issue tracker.
I have updated it to let our production team know that you have started to experience some timeouts again.

I don't know how sophisticated your observability system is, but can you calculate the percentage of timeouts you are getting out of the overall requests you make?
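For reference, the rate can be computed from two raw log counts (the numbers below are made up purely for illustration):

```javascript
// Sketch: timeout rate as a percentage of total requests.
function timeoutRate(timeouts, total) {
  if (total === 0) return 0; // avoid division by zero
  return (timeouts / total) * 100;
}

// e.g. 120 timeouts out of 40,000 requests
console.log(timeoutRate(120, 40000).toFixed(1) + '%'); // prints "0.3%"
```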

I will let you know once we get some more info,


Just to let you know, this issue has been assigned to a project that aims to fix it.

There is no ETA for this, and we will let you know if we get any updates on this issue.

This issue has been closed due to inactivity. Flag to reopen.