OJS in Docker is really slow, high CPU usage in MariaDB

We are using pkp/docker-ojs (GitHub) to run OJS in Docker. This works quite well, but there is one large performance issue: high CPU load (up to 100%) in the MariaDB container, causing extremely slow performance. RAM usage is fine, though. We use Ubuntu as the Docker host.

Article landing pages perform okay, but all listings (archive, issue TOC) are slow. For example, an issue TOC with 48 articles takes 35 seconds to load.

This only appears with a Docker-based database install, not with a local MariaDB.

I’m aware that this might not be an OJS-only issue, but maybe someone has had similar problems and found a solution?

What I tried so far: running the MariaDB optimizer script and tweaking a few config settings, checking the I/O scheduler config to make sure it's not a Docker/volume disk access issue, and adding a second virtual CPU. Beyond that, lots of blog posts and forums point out that when an SQL database hits 100% CPU, the queries can usually be optimized - which doesn't seem to apply here, since OJS runs fine without Docker.
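One thing I haven't fully checked yet is MariaDB's slow query log, to see which listing queries are actually burning the CPU. This is the my.cnf fragment I plan to mount into the container (paths and the 1-second threshold are just my own choices, not docker-ojs defaults):

```ini
[mysqld]
# Log every statement that takes longer than 1 second
slow_query_log      = 1
long_query_time     = 1
slow_query_log_file = /var/lib/mysql/slow.log
```

After reloading a slow issue TOC, the log should name the exact queries behind the 35-second load.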

Any suggestions on how to get OJS in Docker running smoothly would be great.

Hi @ojsbsb

I’m running 50 dockerized OJS instances on a single machine with these same images and it works fine.
In the past I had performance issues due to memory usage, because OJS 3 still has a couple of CPU/memory-intensive queries that can be a bottleneck, but I don’t think that would be a problem for a single site.

My installation ran on a physical machine, but recently I moved it all to a VM (8 CPUs / 32 GB) and ran into problems due to slow disk access. I removed the swap memory and sped up the disk, and everything worked fine again.

How much memory did you assign to this VM? Did you test your disk speeds?

BTW, I’m building those images (we are in a refactoring process right now), so any feedback or comments to improve them are always welcome.

Cheers,
m.


Thanks for your reply, @marc!

I tested the MariaDB container’s disk speed; on average it’s about 0.2 GB/s slower than on the host itself:

root@[container]:/# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=25000 && sync"; rm ddfile
25000+0 records in
25000+0 records out
204800000 bytes (205 MB, 195 MiB) copied, 0.165209 s, 1.2 GB/s
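(A caveat I found afterwards: dd from /dev/zero without a sync flag mostly measures the page cache, not the volume. Rerunning it with conv=fdatasync forces the data to be flushed to disk before dd reports the rate, which should give a more realistic number for the Docker volume - roughly like this, inside the container as before:)

```shell
# Write 200 MB and include the flush-to-disk time in the measurement
dd if=/dev/zero of=ddfile bs=8k count=25000 conv=fdatasync
# Clean up the test file
rm ddfile
```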

This is the memory situation of the host. Do you recommend any changes? I read you use 32 GB, but that’s for 50 dockerized journals…

[user]@[host]:~$ sudo free -h
              total        used        free      shared  buff/cache   available
Mem:          3.8Gi       2.5Gi       360Mi        43Mi       986Mi       1.0Gi
Swap:            0B          0B          0B

Anyway, besides this performance issue, we are really happy with the OJS Docker image! We think it is well documented and the repo is nicely structured - so thank you a lot for your work! :+1:

Your disk is faster than mine, so let’s assume this is not the problem.
My disk is slow, which is also why I removed swap, to avoid bottlenecks.

If possible, try extending RAM to something large (16 GB?) to be completely sure this is not the problem (if we find this is the culprit, we can shrink it later).

I’m quite happy with the images, but it’s a vanilla MariaDB, so it could probably be optimized for Docker/OJS usage… but as I said, we are planning a deep refactoring, so that needs to wait until we stabilize the new image.
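If you want to experiment in the meantime, the biggest single knob on a vanilla MariaDB is usually innodb_buffer_pool_size (the upstream default is only 128 MiB). A rough sketch of a my.cnf fragment you could mount into the container - the values are starting points for a host with ~4 GiB RAM, not tested recommendations:

```ini
[mysqld]
# Keep more of the OJS tables and indexes in RAM (default is 128M)
innodb_buffer_pool_size = 1G
# A larger redo log reduces flush pressure on slow volumes
innodb_log_file_size    = 256M
```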

This is solved now, by the way. I can’t really say why, but after a number of changes and experiments on the host - and using newer image versions in the meantime - the speed is just as expected.


Now that it’s still more or less fresh, would you mind posting a list of the experiments you made?
Just in case others fall in same hole.

Take care,
m.