
Idle Confluence instance causes constant write load on disk/SSD

Patrick M. Hausen June 12, 2020

Hi all,

I run a small, self-hosted Confluence 7.5.1 instance: a 10-user license, not much load.

My system monitoring shows a constant write load of roughly 600 kB/s to disk with just Confluence and MySQL running and definitely no active users.

What can I check to find the cause of this?

Operating system is Ubuntu 20.04 server, no graphical user interface, just this one application.

Thanks,
Patrick

[Attached screenshot: Bildschirmfoto 2020-06-12 um 12.21.31.png]

2 answers

0 votes
Patrick M. Hausen June 12, 2020

The instance is perfectly accessible and performing well. This is not a performance issue. It is just that this server is writing to the SSD 24×7, which may wear it out sooner than desired.

An idle application shouldn't write continuously, apart from some bookkeeping every couple of minutes or so, should it?

I'll try the suggested diagnostics. Thanks.
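
The diagnostics I have in mind, roughly (this assumes `iotop` and the `sysstat` package are installed):

```
# Show only processes that are actually doing I/O, with accumulated totals
sudo iotop -o -a

# Per-process disk read/write statistics, refreshed every 5 seconds
pidstat -d 5
```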

repi (Rising Star) June 12, 2020

Which apps are installed?
Are there any logging settings that would cause Confluence to write detailed logs?
Is the log level set to production or diagnostic?

Patrick M. Hausen June 12, 2020

1.

  • Confluence Source Editor
  • Polls for Confluence
  • Team Calendars

2. Not that I know of.

3. Production.

Patrick M. Hausen June 12, 2020

OK, iotop tells me it's the database (mysqld) that is writing, so I enabled the SQL query log for a short period of time (see the sketch after the stack trace below). I get a bunch of these:

```
2020-06-12 16:02:14,815 ERROR [Caesium-1-2] [migration.agent.queue.QueueBroker] error An error occurred when getting the next batch for consumer type: CONFLUENCE_IMPORT. Message: javax.persistence.PersistenceException: Failed to update database schema
com.atlassian.util.concurrent.LazyReference$InitializationException: javax.persistence.PersistenceException: Failed to update database schema
    at com.atlassian.util.concurrent.LazyReference.getInterruptibly(LazyReference.java:149)
    at com.atlassian.util.concurrent.LazyReference.get(LazyReference.java:112)
    at com.atlassian.migration.agent.store.jpa.impl.DefaultSessionFactorySupplier.get(DefaultSessionFactorySupplier.java:65)
    at com.atlassian.migration.agent.store.jpa.impl.DefaultSessionFactorySupplier.get(DefaultSessionFactorySupplier.java:40)
    at com.atlassian.migration.agent.store.jpa.impl.DefaultPluginTransactionTemplate.on(DefaultPluginTransactionTemplate.java:29)
    at com.atlassian.migration.agent.store.tx.PluginTransactionTemplate.write(PluginTransactionTemplate.java:24)
    at com.atlassian.migration.agent.queue.QueueBroker.getNextBatch(QueueBroker.java:119)
    at com.atlassian.migration.agent.queue.QueueBroker.dispatchBatchToConsumer(QueueBroker.java:113)
    at java.base/java.util.ArrayList.forEach(Unknown Source)
    at com.atlassian.migration.agent.queue.QueueBroker.runJob(QueueBroker.java:100)
    at com.atlassian.confluence.impl.schedule.caesium.JobRunnerWrapper.doRunJob(JobRunnerWrapper.java:117)
    at com.atlassian.confluence.impl.schedule.caesium.JobRunnerWrapper.lambda$runJob$0(JobRunnerWrapper.java:87)
    at com.atlassian.confluence.impl.vcache.VCacheRequestContextManager.doInRequestContextInternal(VCacheRequestContextManager.java:84)
    at com.atlassian.confluence.impl.vcache.VCacheRequestContextManager.doInRequestContext(VCacheRequestContextManager.java:68)
    at com.atlassian.confluence.impl.schedule.caesium.JobRunnerWrapper.runJob(JobRunnerWrapper.java:87)
    at com.atlassian.scheduler.core.JobLauncher.runJob(JobLauncher.java:134)
    at com.atlassian.scheduler.core.JobLauncher.launchAndBuildResponse(JobLauncher.java:106)
    at com.atlassian.scheduler.core.JobLauncher.launch(JobLauncher.java:90)
    at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.launchJob(CaesiumSchedulerService.java:435)
    at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeLocalJob(CaesiumSchedulerService.java:402)
    at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeQueuedJob(CaesiumSchedulerService.java:380)
    at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeJob(SchedulerQueueWorker.java:66)
    at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeNextJob(SchedulerQueueWorker.java:60)
    at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.run(SchedulerQueueWorker.java:35)
    at java.base/java.lang.Thread.run(Unknown Source)
Caused by: javax.persistence.PersistenceException: Failed to update database schema
    at com.atlassian.migration.agent.store.jpa.impl.LiquibaseSchemaUpgrader.upgrade(LiquibaseSchemaUpgrader.java:35)
    at com.atlassian.migration.agent.store.jpa.impl.DefaultSessionFactorySupplier.buildSessionFactory(DefaultSessionFactorySupplier.java:70)
    at com.atlassian.util.concurrent.Lazy$Strong.create(Lazy.java:85)
    at com.atlassian.util.concurrent.LazyReference$Sync.run(LazyReference.java:325)
    at com.atlassian.util.concurrent.LazyReference.getInterruptibly(LazyReference.java:143)
    at com.atlassian.util.concurrent.LazyReference.get(LazyReference.java:112)
    at com.atlassian.migration.agent.store.jpa.impl.DefaultSessionFactorySupplier.get(DefaultSessionFactorySupplier.java:65)
    at com.atlassian.migration.agent.store.jpa.impl.DefaultSessionFactorySupplier.get(DefaultSessionFactorySupplier.java:40)
    at com.atlassian.migration.agent.store.jpa.impl.DefaultPluginTransactionTemplate.on(DefaultPluginTransactionTemplate.java:29)
    at com.atlassian.migration.agent.store.tx.PluginTransactionTemplate.read(PluginTransactionTemplate.java:16)
    at com.atlassian.migration.agent.service.impl.DefaultStatisticsService.loadSpaceStatistics(DefaultStatisticsService.java:138)
    at com.atlassian.migration.agent.service.impl.DefaultStatisticsService.calculateServerStats(DefaultStatisticsService.java:167)
    at com.atlassian.migration.agent.service.impl.SingleJobExecutor.lambda$execute$0(SingleJobExecutor.java:33)
    at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    ... 1 more
Caused by: liquibase.exception.LockException: Could not acquire change log lock.  Currently locked by 2003:a:d59:3800:2a0:98ff:fe4f:2112%enp0s4 (2003:a:d59:3800:2a0:98ff:fe4f:2112%enp0s4) since 11/6/19 12:12 PM
    at liquibase.lockservice.StandardLockService.waitForLock(StandardLockService.java:230)
    at liquibase.Liquibase.update(Liquibase.java:184)
    at liquibase.Liquibase.update(Liquibase.java:179)
    at com.atlassian.migration.agent.store.jpa.impl.LiquibaseSchemaUpgrader.upgrade(LiquibaseSchemaUpgrader.java:30)
    ... 16 more
```
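
(For reference, toggling MySQL's general query log at runtime looks roughly like this; the log file path is just an example, adjust as needed:)

```
# Enable the general query log temporarily (needs a MySQL admin account)
mysql -u root -p -e "SET GLOBAL general_log_file = '/var/log/mysql/query.log'; SET GLOBAL general_log = 'ON';"

# ... let the idle write activity run for a minute or two, then inspect the log ...

# Switch it off again so the query log itself does not become a write source
mysql -u root -p -e "SET GLOBAL general_log = 'OFF';"
```
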
repi (Rising Star) June 12, 2020

Maybe deactivate the three apps and see whether that changes anything.

Do the writes actually originate from Confluence itself?

Patrick M. Hausen June 12, 2020

I found this document about the error message in the log:
https://confluence.atlassian.com/confkb/confluence-upgrade-failed-with-liquibase-exception-lockexception-could-not-acquire-change-log-lock-error-969527207.html?_ga=2.129126724.991102111.1591952157-1391091463.1591718810

I disabled the Cloud Migration Assistant; unfortunately that did not stop the writing.

I deactivated all my other apps, too - no change :-/

If I `systemctl stop confluence`, the writes drop to zero. So yes, it is the Confluence application that causes this, although I think the writes go through the database rather than directly to files, as would be the case with debug logs or similar. I checked `<confluence-install>/logs` as well as `<application-data>/logs`; nothing unusual or large in there.
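
(The gist of the linked KB article is that a stale Liquibase change-log lock row is left behind and has to be cleared in the database. A rough sketch, assuming the default Liquibase lock table name and that the Confluence database is called `confluence`; the article has the authoritative table name and steps, and a backup beforehand is a good idea:)

```
# Inspect the stale lock (the one from 11/6/19 shown in the stack trace above)
mysql -u root -p confluence -e "SELECT * FROM DATABASECHANGELOGLOCK;"

# Release the lock so the migration agent's schema update can proceed
mysql -u root -p confluence -e "UPDATE DATABASECHANGELOGLOCK SET LOCKED = 0, LOCKGRANTED = NULL, LOCKEDBY = NULL WHERE ID = 1;"
```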

0 votes
Amith Mathur {Appfire} (Rising Star) June 12, 2020

Hi @Patrick_M__Hausen , 

Welcome to Atlassian Community!

There can be many reasons for this type of performance-related issue. Is the instance accessible at those times? Does it happen at any particular time? You can go through the documents below to get to the root cause.

Ensure that backups and any scheduled jobs/scripts are disabled.
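
On the operating system side, a quick check for scheduled jobs and timers might look like this (a rough sketch using standard Ubuntu tooling):

```
# Cron entries for the current user and the system-wide cron directories
crontab -l
ls /etc/cron.d /etc/cron.daily /etc/cron.hourly

# systemd timers that could trigger backups or maintenance scripts
systemctl list-timers --all
```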

 

Thanks,
Amith Mathur
