Hi All,
I recently carried out a deployment on a largish Jira instance (2,000 users, 2,709,499 tickets). After deploying to the server I ran a full re-index, which ran for 14.5 hours and then failed.
After looking into the log I can see a reference to an I/O error, as below:
2018-09-16 06:02:04,002 JiraTaskExectionThread-1 ERROR aaliomar 1191x190x1 emm4t4 10.18.71.79,10.46.183.112 /secure/admin/IndexReIndex.jspa [jira.util.index.CompositeIndexLifecycleManager] Reindex All In Background FAILED. Indexer: DefaultIndexManager: paths: [/data/atlassian/application-data/jira/caches/indexes/comments, /data/atlassian/application-data/jira/caches/indexes/issues, /data/atlassian/application-data/jira/caches/indexes/changes]
org.ofbiz.core.util.GeneralRuntimeException: Error getting the next result (I/O Error: Connection reset)
As the instance was required the next working day, I then kicked off a background re-index, which also failed the following day. From what I can see online, this error is slightly different and is possibly caused by a known issue whereby a ticket is modified and then deleted during the background re-index; see the log below:
[jira.util.index.CompositeIndexLifecycleManager] Reindex All In Background FAILED. Indexer: DefaultIndexManager: paths: [/data/atlassian/application-data/jira/caches/indexes/comments, /data/atlassian/application-data/jira/caches/indexes/issues, /data/atlassian/application-data/jira/caches/indexes/changes]
java.lang.NullPointerException
at com.atlassian.jira.issue.index.DefaultIssueIndexer$4.consume(DefaultIssueIndexer.java:340)
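If it helps to picture that NullPointerException, the pattern above is consistent with the consumer being handed a null issue because the issue was deleted between being queued for re-indexing and actually being processed. The sketch below uses entirely hypothetical class and method names (it is not Jira's real indexing code) purely to show how that race produces an NPE and where a null guard would prevent it:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: shows how an issue deleted while a background
// re-index is running can surface as a NullPointerException in the consumer
// that writes each issue to the index. All names here are made up.
public class BackgroundReindexSketch {

    // Stand-in for the issue store; a concurrent delete removes entries mid-run.
    static final Map<Long, String> ISSUE_STORE = new ConcurrentHashMap<>();

    static String loadIssue(Long id) {
        return ISSUE_STORE.get(id); // returns null if the issue was deleted
    }

    // The "consumer": writes one issue to the index. Without a null guard,
    // a concurrently deleted issue triggers an NPE like the one in the log.
    static void consume(String issue) {
        System.out.println("indexing " + issue.toLowerCase()); // NPE if issue == null
    }

    public static void main(String[] args) {
        ISSUE_STORE.put(1L, "DEMO-1");
        ISSUE_STORE.put(2L, "DEMO-2");

        List<Long> snapshotOfIds = List.of(1L, 2L); // ids captured when the re-index started

        ISSUE_STORE.remove(2L); // simulate a user deleting DEMO-2 mid-reindex

        for (Long id : snapshotOfIds) {
            String issue = loadIssue(id);
            // A defensive version would skip deleted issues instead of crashing:
            // if (issue == null) { continue; }
            consume(issue); // throws NullPointerException for the deleted issue
        }
    }
}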
I have subsequently kicked off another background re-index, this time temporarily removing the "Delete Issues" permission across the instance; however, this time I got the I/O connection reset error again!
Today I have kicked off ANOTHER background re-index to see if I get the same issue, but I wondered if anyone had any other ideas I could consider? My thinking at the moment is that I am struggling for resources on the hardware (which is shared with other things!) and that is killing the re-indexes.
Hi, Daniel.
For now, the "Error getting the next result (I/O Error: Connection reset)" error does not tell us the reason behind the indexing failure in a straightforward manner. So, could you gather the log snippet from just before we run into the error, to see what the last action performed during the indexing process was?
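To help collect that context, the sketch below is one possible way to print the lines immediately preceding the failure in atlassian-jira.log. The log path, the marker string, and the 50-line window are assumptions you would adjust for your instance:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.stream.Stream;

// Helper sketch: print the lines that precede the indexing failure marker in
// atlassian-jira.log, so we can see the last action before the error.
public class LogContextDump {
    public static void main(String[] args) throws IOException {
        Path log = Path.of("/data/atlassian/application-data/jira/log/atlassian-jira.log"); // assumed path
        String marker = "Reindex All In Background FAILED";
        int contextLines = 50;

        Deque<String> buffer = new ArrayDeque<>(contextLines);
        try (Stream<String> lines = Files.lines(log)) {
            for (String line : (Iterable<String>) lines::iterator) {
                if (line.contains(marker)) {
                    buffer.forEach(System.out::println); // what happened just before
                    System.out.println(line);            // the failure line itself
                    break;
                }
                if (buffer.size() == contextLines) {
                    buffer.removeFirst();                // keep only the last N lines
                }
                buffer.addLast(line);
            }
        }
    }
}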
Additionally, I am also interested in your thoughts on the hardware resourcing, since you mentioned that it could be one of the possible reasons behind this issue. Can you share the resources allocated to the instance, and is your Jira instance deployed as a Data Center deployment, given that you have more than 2 million tickets? We can see the sizing guidance in the following table (Small, Medium, Large):
Metric | Small | Medium | Large |
Issues | up to 150,000 | 150,000 to 600,000 | 600,000 to 2,000,000 |
Projects | up to 200 | 200 to 800 | 800 to 2,500 |
Users | up to 1,000 | 1,000 to 10,000 | 10,000 to 100,000 |
Custom Fields | up to 250 | 250 to 800 | 800 to 1,800 |
Workflows | up to 80 | 80 to 200 | 200 to 600 |
Groups | up to 2,000 | 2,000 to 10,000 | 10,000 to 50,000 |
Comments | up to 250,000 | 250,000 to 1,000,000 | 1,000,000 to 4,000,000 |
Permission Schemes | up to 25 | 25 to 100 | 100 to 400 |
Issue Security Schemes | up to 50 | 50 to 200 | 200 to 800 |
Any metric that registers above the Large range is XLarge - for example, over 2,000,000 Issues or over 2,500 Projects.
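For the resource question, the small sketch below is one way to report what a JVM has actually been given (processors and heap), as a starting point for comparing against the sizing above; it is only an illustration of the figures to collect (you could equally read JVM_MINIMUM_MEMORY / JVM_MAXIMUM_MEMORY from setenv.sh or the System Info page), not an official Atlassian tool:

// Quick sketch: report the processors and heap visible to a JVM started with
// the same settings as Jira, to compare against the sizing table above.
public class JvmResourceReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("Available processors : %d%n", rt.availableProcessors());
        System.out.printf("Max heap (MB)        : %d%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("Total heap now (MB)  : %d%n", rt.totalMemory() / (1024 * 1024));
        System.out.printf("Free heap now (MB)   : %d%n", rt.freeMemory() / (1024 * 1024));
    }
}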