Hi everyone,
we've just received a notification from Nagios that the tomcat8.exe service on our Jira 7.12.1 server is running at 99% CPU. I tried to log in to Jira and everything works, but it's very slow.
It happened all of a sudden; Jira had been running for a few months without problems.
I looked through the logs, and the only thing happening around the time the CPU went up was an LDAP synchronization, but that completed successfully.
InstallGlancesJobHandler also ran around that time:
2019-11-14 10:08:49,203 Caesium-1-2 INFO ServiceRunner [c.a.j.p.h.service.connect.InstallGlancesJobHandler] Running InstallGlancesJobHandler...
2019-11-14 10:08:49,203 Caesium-1-2 INFO ServiceRunner [c.a.j.p.h.service.connect.InstallGlancesJobHandler] There is no link to HipChat, no need to install glances.
2019-11-14 10:08:54,460 Caesium-1-1 INFO ServiceRunner [c.a.j.p.h.service.ping.RefreshConnectionStatusJobHandler] Running RefreshConnectionStatusJobHandler...
I don't know whether that has anything to do with our issue.
Jira is working, but really slowly. We are going to restart it, but this is our production environment with customers, so we cannot restart it at just any time of day.
Has anyone encountered similar behaviour?
Thanks a lot.
Hello Vickey, thanks for your answer.
Just to let you know, a few minutes ago tomcat8.exe dropped to 25-40% CPU. It fixed itself, and Jira is now running normally again. I have no idea what just happened; neither I nor my colleague did anything.
I have already announced that we are going to restart Jira and the server tonight, outside business hours.
No new plugins were installed recently. (Jira has 100+ days of uptime without any problems.)
We have 8 GB of RAM.
The JVM allocation is set via the registry, JvmMx 3072 (a sketch for reading the values back is just below).
(500 users, 100,000+ issues)
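In case it helps anyone checking the same thing: a minimal Python sketch for reading the heap values back from the Procrun registry key that the tomcat8.exe service runs under. The service key name "JIRA" and the two candidate paths are assumptions; use whatever key actually appears under "Procrun 2.0" on your server.

import winreg

SERVICE_NAME = "JIRA"  # assumption: use the key name shown under "Procrun 2.0" on your server

# Procrun (the tomcat8.exe service wrapper) keeps the JVM settings in the registry;
# 32-bit installs usually sit under WOW6432Node, 64-bit installs do not.
CANDIDATE_PATHS = [
    r"SOFTWARE\Apache Software Foundation\Procrun 2.0\{}\Parameters\Java".format(SERVICE_NAME),
    r"SOFTWARE\WOW6432Node\Apache Software Foundation\Procrun 2.0\{}\Parameters\Java".format(SERVICE_NAME),
]

for path in CANDIDATE_PATHS:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            for name in ("JvmMs", "JvmMx"):  # initial and maximum heap, in MB
                try:
                    value, _ = winreg.QueryValueEx(key, name)
                    print("{} = {} MB".format(name, value))
                except FileNotFoundError:
                    print("{} not set".format(name))
        break
    except FileNotFoundError:
        continue

If the values it prints differ from what you expect, the service is probably not picking up the registry edits you made.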
Memory allocations seem fine to me too.
I'd suggest restarting JIRA at regular intervals - it would be a good thing to do.
Maybe once every two weeks or so.
Check your DB Monitoring screen during the 99% usage period to see whether the DB connections show anything unusual.
Yeah, I guess we should do that at least once a month.
During the issue there was a lot of activity on that screen, but I couldn't connect it to anything specific.
Why does that spike look so much like issue reindexing?
Was somebody running a full JIRA reindex or a project reindex?
Did somebody run an integrity check?
I'd say dig deeper into your log file, catalina.out.
The timeframe is large, so I'm sure you will be able to connect the dots using the timeline.
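If it helps with the digging, here is a rough Python sketch for slicing the log down to just the spike window. It assumes entries start with the same "2019-11-14 10:08:49,203"-style timestamp as the snippet above; the file name and the time window are placeholders to adjust.

from datetime import datetime

LOG_FILE = "atlassian-jira.log"  # or catalina.out, if its lines carry the same timestamp format
WINDOW_START = datetime(2019, 11, 14, 10, 0)   # placeholder: start of the spike window
WINDOW_END   = datetime(2019, 11, 14, 11, 0)   # placeholder: end of the spike window

def parse_timestamp(line):
    # Jira log entries begin with "YYYY-MM-DD HH:MM:SS,mmm" (23 characters)
    try:
        return datetime.strptime(line[:23], "%Y-%m-%d %H:%M:%S,%f")
    except ValueError:
        return None  # continuation line (e.g. a stack trace), keep the previous entry's state

in_window = False
with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        timestamp = parse_timestamp(line)
        if timestamp is not None:
            in_window = WINDOW_START <= timestamp <= WINDOW_END
        if in_window:
            print(line, end="")

Anything that kicks off right before the CPU climbs (a reindex, an LDAP sync, a long-running service) should stand out once the noise outside the window is gone.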
Hi Vickey,
thanks again for your fast answers and kind approach.
That was the first thing I thought of, but neither a Jira reindex nor a project reindex was running at the time of the first spike.
An integrity check was not running either.
I will have to look through the logs more thoroughly, as you said.
------
Just so you know, last night we restarted Jira and the Windows server, ran a full reindex, and everything went fine. I also cleared out the logs (after backing up the old ones). For now it's running really smoothly, so I guess Jira just needed a little rest after that 100+ days of uptime.
So thanks again, have a nice weekend, and let's hope it won't happen again.
Tom
Have you installed any new plugins lately?
What is the memory allocation on your server?
What is the JVM allocation on your server?
You may want to investigate further.
There could be many possible causes, and the corrective measures depend on which one it turns out to be.
I would start with the JVM allocation.