Hi all!
I don't know if anyone else has experienced this - perhaps it's related to the Amazon Web Services outage yesterday?
Last night it looks like our Automation for Jira tried to run every automation at once on random tickets. Between 5:30 and 10:15 pm it created 12 new tickets that contradict the trigger condition (the trigger is a custom field being edited to "yes", but none of those tickets were edited and the custom field was "no" on all of them). I also received dozens of failure notices from other automations, which usually write fields; they all show the error "This component has exceeded the rate limit. Internal rate limit exceeded". We're on an Enterprise plan with unlimited automation runs, so I assume this is a throttling problem, since apparently all the automations were triggered at around the same time.
Any ideas what might have happened? A weird glitch? Something someone accidentally triggered? There are only two site admins and we were both logged off at the time. Not sure about our enterprise admins.
Hi!
Since you are on Enterprise, you also have Enterprise support with Atlassian. Considering the issues yesterday with AWS, where Automation was also impacted, my first point of contact would be Atlassian Support, since they are the only ones able to see whether this was a direct result of the outage or not.
I would rule that out first, because the Community is probably not going to be of much help if this was a result of the outage.
Thanks! I'm reaching out to them as well, just looking to see if anyone else was impacted!
Hi @Cailin Che
It sounds like your instance experienced a sudden spike in automation executions, which can cause unexpected behavior such as automations triggering on random tickets and rate limit errors, even on an enterprise plan. Based on my experience, this is typically due to a large number of automation rules running in parallel, which can overwhelm the rule processing queue and lead to delays, inaccurate triggers, or rules firing on issues that don't match the intended conditions.
Common causes of such spikes include bulk imports that trigger automations, automation rules with looped "Send web request" actions, or multiple automations being triggered by a single event, causing a cascade.
The "Internal rate limit exceeded" error indicates that the automation service temporarily throttled rule executions to maintain system performance, even on enterprise plans.
To investigate, I would advise you to review the global Automation performance insights page to see if there was a spike in automation activity during the affected timeframe.
Audit your automation rules for any recent changes or scheduled/bulk actions.
Check whether any bulk operations were performed or whether any system-wide changes were made during that window (a sketch for cross-checking this via the REST API is below).
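If it helps with that cross-check, here is a rough sketch (Python, using the standard Jira Cloud REST API /rest/api/3/search and /rest/api/3/issue/{key}/changelog resources) of how you could list the issues created in the affected window and pull each issue's changelog to confirm whether the trigger custom field was ever actually edited. The site URL, credentials, date range, and the field name "Approval Flag" are placeholders; swap in your own values.

```python
# Hedged sketch: cross-check the affected window via the Jira Cloud REST API.
# The base URL, credentials, dates, and the field name "Approval Flag" below
# are placeholders, not values from this thread.
import requests
from requests.auth import HTTPBasicAuth

BASE = "https://your-site.atlassian.net"
AUTH = HTTPBasicAuth("you@example.com", "YOUR_API_TOKEN")
HEADERS = {"Accept": "application/json"}

# 1) Find issues created in the affected window (adjust the dates/times).
jql = 'created >= "2025-10-20 17:30" AND created <= "2025-10-20 22:15" ORDER BY created ASC'
search = requests.get(
    f"{BASE}/rest/api/3/search",
    params={"jql": jql, "fields": "summary,created,creator", "maxResults": 50},
    headers=HEADERS,
    auth=AUTH,
).json()

for issue in search.get("issues", []):
    key = issue["key"]
    creator = issue["fields"]["creator"]["displayName"]
    print(f"{key} created {issue['fields']['created']} by {creator}")

    # 2) Pull the changelog to see whether the trigger field was ever edited.
    changelog = requests.get(
        f"{BASE}/rest/api/3/issue/{key}/changelog",
        headers=HEADERS,
        auth=AUTH,
    ).json()
    for history in changelog.get("values", []):
        for item in history.get("items", []):
            if item.get("field") == "Approval Flag":  # hypothetical field name
                print(f"  {key}: field changed "
                      f"{item.get('fromString')} -> {item.get('toString')} "
                      f"at {history['created']}")
```

Issues created by an automation rule usually show the rule actor (often the Automation for Jira app user) as the creator, which makes it easier to separate rule-created tickets from ones created by people during that window.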
I don't think this behavior is related to external outages (like AWS), but rather to internal automation load. If this was a one-time event, it may have been triggered by an unusual automation or bulk operation.