Usage log parsing and performance

Hi PKP, I’ve been stuck on an issue that’s causing a major headache for our system admins. The usage log file processing on our OJS server often overwhelms our DB cluster, especially when several OJS instances run their scheduled tasks at around the same time, which is often the case with the ACRON plugin.

My question is: what’s the best way to stagger the times that ACRON runs the scheduled tasks? Can I just edit the value in the last_run column of the scheduled_tasks table? I know I could also set up the cron jobs manually with crontab, but with 14 independent instances it’s a pain to know who has which plugin enabled at any given time. I know there are plans to have the tools/runScheduledTasks.php script pick up the tasks registered in the DB in 3.2, but until then I’d prefer to use ACRON.
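If editing last_run turns out to be viable, staggering could look like the sketch below, run against each instance’s database with a different offset. The scheduled_tasks table and last_run column come from the question above; the class_name value and the 10-minute interval are assumptions to be checked against your own scheduled_tasks rows:

```sql
-- Hedged sketch: push one instance's next run forward by an offset
-- so the instances stop firing at the same moment.
-- Use a different INTERVAL per instance (10, 20, 30 minutes, ...).
-- The class_name below is an assumption; SELECT * FROM scheduled_tasks
-- first to see the exact task names registered on your install.
UPDATE scheduled_tasks
SET last_run = DATE_ADD(last_run, INTERVAL 10 MINUTE)
WHERE class_name = 'plugins.generic.usageStats.UsageStatsLoader';
```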

The error we get is:

2018-07-10 15:36:12 140421807266560 [Note] The slave is applying a ROW event on behalf of a DELETE statement on table usage_stats_temporary_records and is currently taking a considerable amount of time (61 seconds). This is due to the fact that it is scanning the table while looking up records to be processed. Consider adding a primary key (or unique key) to the table to improve performance.

Would adding a primary key to the usage_stats_temporary_records table potentially solve this issue?
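The replica warning says it is scanning the whole table to find the rows to delete, so an index covering the DELETE’s predicate columns should at least let it seek instead of scan. A hedged sketch, using the column names visible in the query below (the index name is made up; verify the column list against your actual schema before running):

```sql
-- Sketch: composite index matching the DELETE's WHERE clause,
-- so the replica can locate the affected rows without a full scan.
-- Index name ustr_delete_idx is arbitrary.
ALTER TABLE usage_stats_temporary_records
  ADD INDEX ustr_delete_idx (load_id, assoc_type, assoc_id, entry_time);
```

Note this is a plain secondary index rather than the primary/unique key the warning suggests; the temporary-records table may legitimately contain duplicate rows, which a unique key would reject.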

This is the query that’s causing everything to hang:

-- Connection Id: 1553702
-- User: *********
-- Host: *********
-- DB: *********
-- Command: Query
-- Time: 318
-- State: query end
DELETE FROM usage_stats_temporary_records
WHERE assoc_type = 256 AND assoc_id = 6 AND entry_time = 1531164750 AND load_id = 'usage_events_20180709.log'

Hi @bart,

I would suggest disabling the ACRON plugin and using conventionally scheduled jobs instead (e.g. with cron).
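With cron you control the start times directly, so the instances can be staggered by construction. An illustrative crontab (all paths and times are hypothetical; in 3.1 a plugin’s task file must be passed as an argument, as you noted):

```
# Illustrative crontab: run each instance's usage-stats task at a
# different minute so they don't hit the DB cluster simultaneously.
# Paths are placeholders; point each entry at the right installation.
0  2 * * * php /var/www/ojs1/tools/runScheduledTasks.php plugins/generic/usageStats/scheduledTasks.xml
10 2 * * * php /var/www/ojs2/tools/runScheduledTasks.php plugins/generic/usageStats/scheduledTasks.xml
20 2 * * * php /var/www/ojs3/tools/runScheduledTasks.php plugins/generic/usageStats/scheduledTasks.xml
```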

Alec Smecher
Public Knowledge Project Team