We started using the JWME script, and everything was fine until we began running it against a large amount of work.
Once we launched a big job, the script hit the API limit, which broke the whole job chain, created a lot of unlinked tasks, etc. (it produced a lot of trash (unusable) data).
As stated on the https://developer.atlassian.com/server/hipchat/hipchat-rest-api-rate-limits/ page, the limit is 500 requests per 5 minutes.
Is it possible to increase/change this limit?
Affected Jira: https://kruschecompany.atlassian.net/
For guests of this page:
JIRA API limitations:
--------------------------
If you are just an advanced user:
https://developer.atlassian.com/server/hipchat/hipchat-rest-api-rate-limits/
A: "add-on can make 500 API requests per 5 minutes"
--------------------------
If you are a developer:
https://developer.atlassian.com/cloud/jira/platform/rate-limiting
A: If you are an app developer, please read this page from cover to cover.
--------------------------
Why this limitation exists:
https://jira.atlassian.com/browse/JRACLOUD-41876
A: "In some cases where users sending REST API request to run complex JQL queries with high frequency, it returns enormous number of results and it potentially causes an outage for the instance."
--------------------------
How to increase the API limitation:
A: Only developers can do this, but, as the previous link mentions, a large number of API requests can break your Jira instance. Therefore, it is better to work around the limit by spreading the processing out over time.
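As an illustration of spreading requests out over time, here is a minimal sketch in Python. The 500-requests-per-5-minutes figures come from the page linked above; the Throttle class itself is a hypothetical helper, not part of any Atlassian SDK:

```python
import time
from collections import deque

# Hypothetical helper: enforces the documented budget of 500 requests
# per 5 minutes by tracking request timestamps and sleeping whenever
# the next call would exceed the limit.
class Throttle:
    def __init__(self, max_requests=500, window_seconds=300):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()

    def wait(self):
        now = time.monotonic()
        # Discard timestamps that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            # Sleep until the oldest request leaves the window.
            time.sleep(self.window - (now - self.timestamps[0]))
        self.timestamps.append(time.monotonic())

# Usage: call throttle.wait() before every REST request.
throttle = Throttle()
```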
--------------------------
I want to break my Jira instance! How to increase the API limitation?
A: Purchase an Atlassian license with direct access to the support team and try to persuade them to disable the limitation on your specific instance.
There is, of course, the other option: buy Jira Data Center.
Your Jira instance will be all yours, and you're welcome to do whatever you want with it. Set whatever rate limits you want. Buy as much performance as you want. Go right ahead and break it if you want. You're not sharing with anyone, and you affect nobody but yourself.
Sigh! Another week and another person asking the same old "Can I increase the API limit" question.
The answer is always the same, @Ivan Nevmerzhytskyi... no, you cannot increase the API rate limits.
What would be the point of having a limit if it could be exceeded? That would defeat the purpose of the limit being there.
The API limits are there for a very good reason... you're on a CLOUD platform, which means you have to share resources with the other tenants on the platform and use them responsibly and fairly. Follow the instructions in the documentation on how to know when you're approaching the limit, back off your requests or break them into batches, add some delay/jitter, and you'll be fine.
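To make that concrete, here is a minimal sketch of back-off with jitter, assuming the Python requests library and placeholder URLs and credentials. Per the rate-limiting documentation linked above, Jira Cloud signals throttling with HTTP 429 and may include a Retry-After header:

```python
import random
import time

import requests  # third-party: pip install requests

def get_with_backoff(url, auth, max_retries=5):
    """GET a Jira Cloud resource, backing off when rate-limited.

    A sketch only: production code should also handle 5xx responses
    and cap the total wait time.
    """
    for attempt in range(max_retries):
        response = requests.get(url, auth=auth)
        if response.status_code != 429:
            return response
        # Prefer the server's Retry-After header (seconds) when present;
        # otherwise use exponential backoff plus jitter so that parallel
        # clients don't retry in lockstep.
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else 2 ** attempt + random.random()
        time.sleep(delay)
    raise RuntimeError(f"still rate-limited after {max_retries} retries: {url}")

# Hypothetical usage (site URL, JQL, and credentials are placeholders):
# resp = get_with_backoff(
#     "https://your-site.atlassian.net/rest/api/3/search?jql=project=FOO",
#     auth=("user@example.com", "api-token"),
# )
```

The jitter (the random fraction of a second) keeps many parallel clients from all retrying at the same moment.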
Hi,
Thank you for your answer!
I see what you mean, but I'm not entirely convinced.
I know that adding delays to the code may resolve my problem, but, as you stated before, I'm on a CLOUD platform.
Cloud is about flexibility: it's the place where you pay another $4.99 per month to upgrade something to the next subscription level.
A question for you: when your free Google Drive account gets full (15 GB), what will you do: register a new (second) account, or purchase an additional 100 GB?
Exactly, you'd just pay for the upgrade, and that's no bummer.
Ahh, @Ivan Nevmerzhytskyi. You've undermined your argument by presenting a false analogy that just illustrates that you didn't understand the problem :) The REST API rate limits aren't a constraint on capacity, they're a constraint on bandwidth.
Yes, buying more bytes (individual capacity) with Google Drive increases your total capacity, but the performance of the individual bytes doesn't change (individual bandwidth), so your purchase has no effect on total bandwidth. Moving a large amount of data to Google Drive will still take longer than moving a small amount of data.
When you buy more user accounts with Atlassian, you just buy more total capacity, but no individual user's performance changes. Individual bandwidth stays exactly the same.
The problem: a large amount of data needs to be processed, but you are constrained to processing that data within the bandwidth limit of a single user account. This results in the solution taking a long time.
A smart developer will make their solution work within the bandwidth limit and just accept that the amount of time it takes to process is an inevitable outcome.
A lazy developer will try to change the bandwidth limit to work with their solution, and won't accept the inevitable processing time outcome.
A really smart but equally lazy developer would see that they could split the workload into batches and run them concurrently using a different user account per batch, thereby drastically increasing the total bandwidth and also reducing the total processing time! WOOHOO!! But... they also know there are other inevitable outcomes of such a solution (a sketch of the idea follows below).
So. Which type of developer do you want to be Ivan?
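For illustration only, here is a sketch of that batch-splitting idea. It assumes the limit applies per user account (a later reply in this thread disputes that, saying the limit is per instance), and the bot accounts and the single-field payload are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

# Hypothetical bot accounts, one per batch: (email, API token) pairs.
ACCOUNTS = [
    ("bot-1@example.com", "token-1"),
    ("bot-2@example.com", "token-2"),
    ("bot-3@example.com", "token-3"),
]

def update_issue(base_url, auth, issue_key):
    # One REST call per issue; the single-field payload is illustrative.
    resp = requests.put(
        f"{base_url}/rest/api/3/issue/{issue_key}",
        json={"fields": {"summary": "updated by batch job"}},
        auth=auth,
    )
    resp.raise_for_status()

def run_batch(base_url, auth, batch):
    for key in batch:
        update_issue(base_url, auth, key)

def run_batches(base_url, issue_keys):
    # Stripe the issues into one batch per account and run the batches
    # concurrently, so each batch spends its own account's quota.
    n = len(ACCOUNTS)
    batches = [issue_keys[i::n] for i in range(n)]
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [
            pool.submit(run_batch, base_url, auth, batch)
            for auth, batch in zip(ACCOUNTS, batches)
        ]
        for future in futures:
            future.result()  # re-raise any exception from the workers
```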
Hey,
I hope you are well, @Sunny Ape.
So, I guess my simple example was pretty unclear.
Ok. Another example:
AWS S3 Bucket
You pay for the amount of data stored (disk size), and you pay for the amount of data transferred (upload/download network traffic).
In our case, the problem (the REST API limitation) is not about the size of the data, whether stored or transferred.
The problem is the number of request attempts.
E.g.:
1) I need to upload or download one file (2 GB in size).
Jira: NP, bro, do it.
What happened: 1 API request was used, plus 2 GB of traffic.
2) I need to upload or download 10 duplicate files (2 GB each) in a row, as separate requests.
Jira: NP, bro, do it.
What happened: 10 API requests were used, plus 20 GB of traffic.
3) I need to upload or download 501 files of 1 byte each.
Jira: NP, bro, do it, but only the FIRST 500. The remaining one you may get after the 5-minute timeout.
What happened: 500 API requests were used, 1 API request was over the limit, and ONLY 500 bytes (0.0005 MB) of traffic were used.
As you can see, the real problem is not bandwidth. The real problem is the number of requests.
--------------------------
What's going wrong in my situation?
In this case, if you are familiar with JWME, you should know that JWME scripting is pretty limited (more so in the Cloud version of Jira than in the DC version).
That's why sometimes, when you need to modify something that should (logically) be very easy to edit, you first have to request a weather forecast for Mars, then send it to NASA, and only then can you do what you need. (This is a joke; unfortunately, I can't share the real code.)
OK, here is a more realistic situation:
I have 600 Jira tasks.
I need to update only ONE field in every task.
But I can't do it in one attempt because I have a limitation of 500 API requests per 5 min.
So I need to process the first 500 now and the remaining 100 after the 5-minute timeout (see the sketch below).
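A minimal sketch of that chunked processing, assuming a hypothetical update_field() helper that performs one REST call to update a single field on one issue:

```python
import time

LIMIT = 500      # requests allowed per window (from the linked docs)
WINDOW = 5 * 60  # window length in seconds

def process_in_chunks(issue_keys, update_field):
    # Process up to LIMIT issues, then wait out the window before
    # starting the next chunk.
    for start in range(0, len(issue_keys), LIMIT):
        for key in issue_keys[start:start + LIMIT]:
            update_field(key)       # one API request per issue
        if start + LIMIT < len(issue_keys):
            time.sleep(WINDOW)      # wait out the 5-minute window
```

With 600 issues, this runs 500 updates, sleeps five minutes, then runs the remaining 100.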
--------------------------
I hope now my examples are more clear than before.
I understand that this is probably a kind of DDoS protection (or maybe it is a real limitation of the cloud VMs, but then why not just purchase higher-performance instances?).
Currently, the issue is "bypassed" by reducing the number of tasks processed at one time (in a row, one by one).
More than that: I needed to run the full process only ONE time, after which the load would drop.
But OK. Instead of a few dollars (for increased instance performance), we spent team time (worth a larger amount of dollars).
As I said before, the issue will be resolved with money; the question was only who will get it.
P.S.
"split the workload into batches, run them concurrently using a different user account per batch"
The API limit is per instance (not per user or batch).