Table ‘sessions’ crashes often in OJS 3.1.2.1

Dear PKP community:

We are wondering if someone can give us advice about a database issue related to the “sessions” table. It has been growing in size, and after a while the table crashes. Then it is impossible to repair or optimize it: DB Error: Table './db_ojs2017/sessions' is marked as crashed and should be repaired.
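For reference, a minimal sketch of the standard MySQL repair and optimize statements we run against the MyISAM table named in the error (both currently fail with the same message):

REPAIR TABLE sessions;
OPTIMIZE TABLE sessions;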

We have been monitoring the table sessions for a while with these results:

Day: size, overhead

Day 1: 798.3 KB, 216 B
Day 2: —
Day 3: 63.5 MB, 6.8 MB
Day 4: —
Day 5: 67.5 MB, 7.6 MB

Is that expected behavior for a standard OJS 3.1.2.1 / MySQL instance?

What could we do to avoid this growth of the table in the future?

The solution we have implemented for now is to clean up the table more often, but what are the implications of cleaning that specific table daily, for example?

Finally, some useful data:

OJS version: 3.1.2.1

MySQL version: 5.7.24

PHP version: 7.2.19

OS: Ubuntu 14.04.5

These are our security options:

force_ssl = Active

force_login_ssl = Active

session_check_ip = Active

encryption = sha1

reset_seconds = 7200

I would really appreciate your comments on this matter.

Regards

Arley

Hi Arley,

It’s a little weird that you’re getting to 41k session records in just a few days. Are you getting that much traffic? Do you see lots of new registrations or crawling by bots? If so, do you limit crawling with a robots.txt file?

In any case, we have journals whose session tables routinely contain 10k records, and we’ve never had problems with the table crashing. We use InnoDB as the table type, if that’s also an option for you. InnoDB provides row-level locking, which can prevent SQL queries from stacking on top of each other.
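If switching engines is an option, a minimal sketch of the conversion, assuming the stock OJS sessions table currently on MyISAM:

ALTER TABLE sessions ENGINE = InnoDB;

Run it during a quiet period, since the statement rewrites the whole table.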

Cheers.
Jason

Thanks for your reply, Jason

I agree, it is weird.

Well, according to Analytics, we are getting around 50,000 sessions per month (around 1,500 per day). So, to be honest, I don’t know whether that is a lot of traffic compared to other journals :slight_smile:

Regarding registrations, I am not monitoring them, but they seem to be normal. The only thing is that we do not limit crawling… any idea what parameters to include in the robots.txt?

Following your suggestion, we will change to InnoDB to see if that fixes the issue.

Finally, is it OK if we clean the table daily? Would that cause any trouble for users or statistics?

Hi Arley,

That’s a fair bit of traffic. If the IPs are all different, you’ll probably see a fair number of sessions in the sessions table. Cleaning it is fine, as long as you only remove sessions that have expired (i.e., older than your 7200-second limit).
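A minimal sketch of that cleanup, assuming the stock OJS sessions table, which stores the last-activity time as a Unix timestamp in a last_used column (check your schema first):

-- Remove only sessions idle longer than the 7200-second reset limit.
DELETE FROM sessions WHERE last_used < UNIX_TIMESTAMP() - 7200;

You could run that from a daily cron job; sessions still within the limit are left alone, so active users are not logged out.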

We limit crawling using a robots.txt file with:

User-agent: *
Disallow: /cache/
Crawl-delay: 60

in it. That slows bots down a bit. You’ll still get sessions, but it should help with the load average and give MySQL more resources to work with.

Cheers
Jason


That is wonderful advice, thanks a lot.
