OJS 3.2: Database takes too long

Hello,

We’ve been seeing that our database takes too long to respond to some queries, like this one:

SELECT o.submission_id,
       MAX(s.context_id) AS journal_id,
       MAX(i.date_published) AS i_pub,
       MAX(p.date_published) AS s_pub,
       COUNT(*) AS count
FROM submissions s
     JOIN publications p ON (p.publication_id = s.current_publication_id)
     JOIN publication_settings ps ON (ps.publication_id = p.publication_id AND ps.setting_name = 'issueId')
     JOIN issues i ON (CAST(i.issue_id AS CHAR) = ps.setting_value)
     JOIN submission_search_objects o ON (s.submission_id = o.submission_id)
     NATURAL JOIN submission_search_object_keywords o0
     NATURAL JOIN submission_search_keyword_list k0
WHERE s.status = 3
      AND i.published = 1
      AND k0.keyword_text = 'los'
      AND i.journal_id = 7
GROUP BY o.submission_id
ORDER BY count DESC
LIMIT 500

The system takes 20 to 30+ seconds to respond, and sometimes it even hangs and OJS stops working for more than 3 minutes.

Is there a way to make the database work faster?

best regards,

Juan

Hi @Juan_Lopez,

This is a full-text search. What do you get when you run EXPLAIN on this query (in MySQLAdmin or another MySQL client)?
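For example, prefixing a query with EXPLAIN shows, per table, the join type, the candidate keys, the key actually chosen, and the estimated rows examined (a minimal sketch against two of the tables named above; "ALL" in the type column means a full table scan):

```sql
-- Minimal EXPLAIN sketch: look for "ALL" (full scan) and NULL keys
EXPLAIN
SELECT s.submission_id
FROM submissions s
JOIN publications p ON (p.publication_id = s.current_publication_id)
WHERE s.status = 3;
```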

Regards,
Alec Smecher
Public Knowledge Project Team

The database is running many copies of the same query… here is the EXPLAIN output for one of them:

image

and these are the queries blocked because of the number of queries waiting for a response:

image

Hi @Juan_Lopez,

If you’re seeing a lot of simultaneous queries like this, it’s possible that a bot is hitting the search page and causing your database to bog down. I’d suggest looking through your request log to see if you can spot the corresponding “search” requests. If there appear to be an unreasonable number coming e.g. from one or two sources, you may need to investigate that.

Regards,
Alec Smecher
Public Knowledge Project Team

Hi,

We’ve been keeping an eye on the logs, and these are the errors that keep showing up:

[Tue Oct 06 16:09:05.156058 2020] [proxy_fcgi:error] [pid 242168:tid 139804759004928] (70007)The timeout specified has expired: [client 172.16.29.2:10353] AH01075: Error dispatching request to : (polling)

[Tue Oct 06 16:09:06.090556 2020] [proxy_fcgi:error] [pid 242168:tid 139805153580800] (70007)The timeout specified has expired: [client 172.16.29.2:10380] AH01075: Error dispatching request to : (polling)

[Tue Oct 06 16:09:14.932060 2020] [proxy_fcgi:error] [pid 241441:tid 139804054378240] (70007)The timeout specified has expired: [client 172.16.29.2:10546] AH01075: Error dispatching request to : (polling), referer: https://www.google.com/

[Tue Oct 06 16:09:15.716093 2020] [proxy_fcgi:error] [pid 241225:tid 139805187151616] (70007)The timeout specified has expired: [client 172.16.29.2:10557] AH01075: Error dispatching request to : (polling), referer: https://www.google.com/

[Tue Oct 06 16:17:48.320645 2020] [proxy_fcgi:error] [pid 241224:tid 139803517507328] (70007)The timeout specified has expired: [client 172.16.29.2:19185] AH01075: Error dispatching request to : (polling), referer: https://scholar.google.es/

[Tue Oct 06 16:18:39.890542 2020] [proxy_fcgi:error] [pid 241224:tid 139804020807424] (70007)The timeout specified has expired: [client 172.16.29.2:20029] AH01075: Error dispatching request to : (polling), referer: https://www.google.com/

[Tue Oct 06 16:18:44.722566 2020] [proxy_fcgi:error] [pid 241225:tid 139804759004928] (70007)The timeout specified has expired: [client 172.16.29.2:20137] AH01075: Error dispatching request to : (polling), referer: https://scholar.google.es/

[Tue Oct 06 16:20:10.829394 2020] [proxy_fcgi:error] [pid 241224:tid 139804591249152] (70007)The timeout specified has expired: [client 172.16.29.2:21728] AH01075: Error dispatching request to : (polling)

Hi @Juan_Lopez,

That’s a web server (fcgi) message, not an OJS message, but it indicates that your pool of PHP processes can’t keep up with the number of requests. Since several of these reference Google, you should be able to slow down indexing using robots.txt.
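For example, a robots.txt at the web root along these lines keeps well-behaved crawlers off the search handler entirely (the path is an assumption — match it to your installation’s search URLs — and note that Googlebot ignores Crawl-delay; its rate is adjusted in Search Console instead):

```text
User-agent: *
# Keep crawlers off the expensive full-text search pages
Disallow: /index.php/*/search
# Ask crawlers that honour it to slow down (Googlebot ignores this directive)
Crawl-delay: 10
```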

Regards,
Alec Smecher
Public Knowledge Project Team

Hi @asmecher

We did as you said; however, the problem persists after 10 days of following your instructions…

Is it possible that something else might be happening?

The same error keeps showing:

image

and our firewall is not blocking any connections or anything of the sort.

Hi @Juan_Lopez,

Did you look through the access log for evidence of aggressive indexing of the search page? If that’s still happening, you’ll need to throttle it using e.g. robots.txt.
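One way to check (a sketch assuming an Apache combined-format access log at the usual path; adjust both to your setup) is to count search requests per client IP and see whether one or two addresses dominate:

```shell
# Field 7 of a combined-format Apache access log is the request path.
# Count hits on the OJS search handler per client IP, busiest first.
LOG=/var/log/apache2/access.log   # assumption: adjust to your server's log path
if [ -r "$LOG" ]; then
    awk '$7 ~ /\/search/ {print $1}' "$LOG" | sort | uniq -c | sort -rn | head -20
fi
```

If a handful of IPs account for most of the search traffic, those are the clients to throttle or block.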

Regards,
Alec Smecher
Public Knowledge Project Team

Yes, the access log seems fine; however, this error is thrown when we:

  1. Try to make an adjustment to an article,
  2. Try to submit a new article,
  3. Try to make a change to the journal,
  4. Try to activate a DOI,
  5. Try to make a submission via QuickSubmit.

Everything that involves the database is throwing this error. :confused:

Hi @Juan_Lopez,

Are you still seeing large numbers of blocking queries as in your screenshots above?
Which storage engine are you using – MyISAM, or InnoDB?

Regards,
Alec Smecher
Public Knowledge Project Team

Our tech team is asking whether there is any “best practices” documentation for the OJS 3.2 configuration, given that:

  1. The table submission_search_object_keywords has more than 21 million records, storing every word a search can match against, including very common ones such as “the”. Filtering on one of these common words returns anywhere from 300,000 to 2,000,000 records, depending on how common the word is. We’d like to ask the community whether the search criteria can be narrowed, since with a table of this volume a query on a value as common as “the” is far less efficient than you would expect.

image

  2. On the other hand, and no less important, we observe that the database has no ER model: there are no declared relationships between the tables. Although the tables do have keys and indexes, relationships are essential for a database, and without them query performance suffers badly — even more so when NATURAL JOIN is used (the expensive query identified above uses it twice), since NATURAL JOIN matches tables on their identically named columns without the query specifying which keys to use. We would appreciate it if this could be escalated to the community to learn how it has been mitigated.

image
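The volume concern in point 1 can be checked directly with a frequency query (a sketch; the join column keyword_id is inferred from the NATURAL JOINs in the slow query, so verify it against your schema before running):

```sql
-- List the most frequent keywords to see how skewed the 21M-row table is
SELECT k.keyword_text, COUNT(*) AS occurrences
FROM submission_search_keyword_list k
JOIN submission_search_object_keywords o USING (keyword_id)
GROUP BY k.keyword_text
ORDER BY occurrences DESC
LIMIT 20;
```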

Our monitoring system shows the following about the database:

image

Time, in minutes, taken by the query mentioned above:
image

*************************************** initial query ********************************************

SELECT o.submission_id,
       MAX(s.context_id) AS journal_id,
       MAX(i.date_published) AS i_pub,
       MAX(p.date_published) AS s_pub,
       COUNT(*) AS count
FROM submissions s
     JOIN publications p ON (p.publication_id = s.current_publication_id)
     JOIN publication_settings ps ON (ps.publication_id = p.publication_id AND ps.setting_name = ?)
     JOIN issues i ON (CAST(i.issue_id AS CHAR) = ps.setting_value)
     JOIN submission_search_objects o ON (s.submission_id = o.submission_id)
     NATURAL JOIN submission_search_object_keywords o0
     NATURAL JOIN submission_search_keyword_list k0
WHERE s.status = ?
      AND i.published = ?
      AND k0.keyword_text = ?
      AND i.journal_id = ?
GROUP BY o.submission_id
ORDER BY count DESC
LIMIT ?;

 

************************************* example with params ***************************************

SELECT o.submission_id,
       MAX(s.context_id) AS journal_id,
       MAX(i.date_published) AS i_pub,
       MAX(p.date_published) AS s_pub,
       COUNT(*) AS count
FROM submissions s
     JOIN publications p ON (p.publication_id = s.current_publication_id)
     JOIN publication_settings ps ON (ps.publication_id = p.publication_id AND ps.setting_name = 'issueId')
     JOIN issues i ON (CAST(i.issue_id AS CHAR) = ps.setting_value)
     JOIN submission_search_objects o ON (s.submission_id = o.submission_id)
     NATURAL JOIN submission_search_object_keywords o0
     NATURAL JOIN submission_search_keyword_list k0
WHERE s.status = 3
      AND i.published = 1
      AND k0.keyword_text = 'los'
      AND i.journal_id = 7
GROUP BY o.submission_id
ORDER BY count DESC
LIMIT 500

Hope you guys can help :pray:

Hi @asmecher,

Is there something we can do about the last message that I posted?

Sorry to bother you.

My tech team just told me that the engine used is MyISAM. Is it possible to do the following?

  1. To eliminate the per-table locks and make better use of the server’s memory and CPU, we propose changing at least the tables involved in the mentioned query to InnoDB (“at least” because performance improves substantially with this engine, and we could evaluate converting every table in the database).

  2. We made this engine change in the test environment and the query execution time dropped to a few seconds. Additionally, after converting to InnoDB we identified that two important indexes were needed; with them, the test environment achieved times in the milliseconds. The indexes created were the following:

    ALTER TABLE ojs3.submission_search_objects
      ADD INDEX submission_id_idx (submission_id ASC);

    ALTER TABLE ojs3.submissions
      ADD INDEX current_publication_id_idx (current_publication_id ASC);

Because these changes were made in the test environment, we would like to know: if we make them in production, will we have any kind of problem with the platform?

… Sorry for the long response, but you guys are our only hope :pray:

Thank you very much for your time!


Hi @Juan_Lopez,

Whoops, yes, those indexes should definitely be added! I’ll double-check our codebase to see if I can spot why they are missing. But yes, I definitely do recommend using InnoDB in general, and adding those indexes will not cause problems. I’ve posted this at Add indexes for search performance · Issue #6301 · pkp/pkp-lib · GitHub and please watch for me to add SQL there to generate the indexes – I’d recommend following the index naming convention I use there.
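For reference, converting a table’s engine is a single statement per table (a sketch listing the tables from the slow query; back up first, since the rewrite of a 21-million-row MyISAM table will take a while and locks the table for the duration):

```sql
-- One-time conversion of the search tables from MyISAM to InnoDB
ALTER TABLE submission_search_object_keywords ENGINE = InnoDB;
ALTER TABLE submission_search_objects ENGINE = InnoDB;
ALTER TABLE submission_search_keyword_list ENGINE = InnoDB;
```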

Regards,
Alec Smecher
Public Knowledge Project Team
