
Forum Moderators: open


How to diagnose severe slow-down caused by a table?

The whole site becomes unusable after accessing one script

     
6:28 pm on Jul 4, 2016 (gmt 0)

Preferred Member from GB 

10+ Year Member Top Contributors Of The Month

joined:July 25, 2005
posts: 402
votes: 16


Hi,

I've got a puzzling error here. I'm about to finish building a large toolset consisting of approx. 100 PHP files, each relying heavily on MySQL operations.

All the scripts use a similar architecture and I've never had any issues. Then one of them suddenly started acting weird.

Once I access this script, the current session becomes unusable - in other words, I cannot access any other script on the server. I've never experienced this behaviour before. Normally you get errors, notices or timeouts; in this case the browser progress bar just goes round and round forever. I don't even get a timeout, and the browser console doesn't show me anything.

The only clue I get is these lines in the raw error log:

[Thu Jun 30 08:42:19 2016] [error] [client *.*.92.176] File does not exist: /home/example/public_html/500.shtml, referer: example.com/script.php

So it's throwing a 500 error, which is probably the most unhelpful error code ever.

The reason I've narrowed it down to a database issue is that a very simple PHP file that doesn't rely on the db works completely fine.

To get the tools to work again, I have to delete my cookies and flush my DNS. Then in this fresh session, if I avoid hitting the faulty script, everything works well. Then I click on this script again, and all the other scripts are "contaminated" with the same error.

The faulty script is referencing the biggest table in my system, which has... wait for it... drumroll... 40,000 rows of data. And that's just me using it. I expect it to have well over 5 million rows at any point. I have worked with huge tables before and I've never had any issues.

My main problem is that I don't even understand the cause of this error and I have no idea how to diagnose this.

Help please!
6:47 pm on July 4, 2016 (gmt 0)

Full Member

5+ Year Member

joined:Aug 16, 2010
posts:251
votes: 20


You could start by enabling the MySQL slow query log and seeing whether a SQL query is causing this problem.

Especially with MyISAM tables there can be a deadlock where a long SELECT query in combination with an UPDATE query locks the system.
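For anyone with the necessary privileges, a minimal sketch of turning the slow query log on at runtime (the log file path is an example, not from the thread; on shared hosting the equivalent my.cnf settings may be the only option):

```sql
-- Requires the SUPER privilege (or SYSTEM_VARIABLES_ADMIN on newer MySQL).
-- Adjust the log file path for your server.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- log anything running longer than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
```

The same settings can go in the `[mysqld]` section of my.cnf so they survive a restart.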
6:49 pm on July 4, 2016 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member tangor is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Nov 29, 2005
posts:8167
votes: 609


Have you debugged the offending script? One method is to insert breakpoints in the script (if execution reached this point, output a marker) and see where it breaks down. Also check your database calls and any joins.
8:04 pm on July 4, 2016 (gmt 0)

Preferred Member from GB 

10+ Year Member Top Contributors Of The Month

joined:July 25, 2005
posts: 402
votes: 16


Thank you for the suggestions.

@bhukkel, I don't think I can do that. I get: #1227 - Access denied; you need (at least one of) the SUPER privilege(s) for this operation.

BTW, I changed the engine from MyISAM to InnoDB in the hope this would solve the issue, or at least throw a different kind of error, but there was no change.

@tangor, I have and I think it breaks down around this line:

DELETE t1 FROM my_table t1 INNER JOIN my_table t2 ON t1.t_id = t2.t_id WHERE t1.id < t2.id AND t1.owner_id = '10';

It's an inner join that deletes older duplicate entries. I've used that quite a lot in other parts of the script.
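One thing worth checking (a sketch, using the table and column names from the query above): without an index on `t_id`, that self-join may compare every row against every other row, which grows quadratically with table size.

```sql
-- Hypothetical supporting index for the self-join; names assume the
-- schema implied by the query in this thread.
ALTER TABLE my_table ADD INDEX idx_tid_id (t_id, id);

-- EXPLAIN on the equivalent SELECT shows whether the join now uses it:
EXPLAIN SELECT t1.id
FROM my_table t1
INNER JOIN my_table t2 ON t1.t_id = t2.t_id
WHERE t1.id < t2.id AND t1.owner_id = '10';
```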

[edited by: adder at 8:31 pm (utc) on Jul 4, 2016]

8:21 pm on July 4, 2016 (gmt 0)

Senior Member from GB 

WebmasterWorld Senior Member andy_langton is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Jan 27, 2003
posts:3332
votes: 140


Once I've accessed this script, the current session becomes unusable - in other words, I cannot access any other script on the server.


If your PHP script starts a session and uses file-based sessions, you may be running into session file locking: requests belonging to the same session are processed sequentially, each waiting until the active script releases its lock on the session file. So you have to wait for the broken script to finish before you can access anything else. You can call session_write_close() explicitly to avoid this behaviour (although it doesn't address your underlying problem).
8:34 pm on July 4, 2016 (gmt 0)

Preferred Member from GB 

10+ Year Member Top Contributors Of The Month

joined:July 25, 2005
posts: 402
votes: 16


@Andy Langton, thanks, it's a valid point, however I had copied session management from my older scripts and each file uses the exact same procedure for sessions. I did test the sessions just to be on the safe side, though.
9:48 pm on July 4, 2016 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month

joined:Sept 25, 2005
posts:1787
votes: 266


I have and I think it breaks down around this line:

Not a lot of certainty there :-) Insert as many checkpoints -- with output, e.g. echo __LINE__; -- as you can, then run the script from the command line so that you get direct feedback in your terminal until the script begins misbehaving. Normally I would recommend a profiler like Blackfire but I'm not sure that would be of much use if the script doesn't even finish or exit cleanly.
12:35 am on July 5, 2016 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month

joined:Aug 30, 2002
posts: 2627
votes: 96


Not exactly sure how the tables are indexed. It looks like some of the conditions might require a full read of each table, and this could result in a slow query. Even though the inner join is probably the correct way of doing things, the query could be rewritten as "DELETE FROM table1 USING table1, table2 WHERE <conditions>;". It is crude but effective, though it may not be of much use here. The other aspect might be how MyISAM stores and deletes data. It might be worth optimizing the tables periodically.

On the slow query side, the script might be hitting PHP's execution time limit (max_execution_time, which defaults to 30 seconds).
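The periodic optimization mentioned above is a single statement (table name taken from the thread's earlier example):

```sql
-- MyISAM does not automatically reclaim space left by deleted rows;
-- OPTIMIZE TABLE defragments the data file and rebuilds the indexes.
OPTIMIZE TABLE my_table;
```

Note that OPTIMIZE TABLE locks the table while it runs, so on a large table it is best done off-peak.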

Regards...jmcc
7:26 pm on July 16, 2016 (gmt 0)

Preferred Member from GB 

10+ Year Member Top Contributors Of The Month

joined:July 25, 2005
posts: 402
votes: 16


Thanks everyone for the ideas. After disabling a few things that were running alongside, I confirmed the inner join was the problem. I don't know why; I can't find the mistake. So I've just rewritten the query as:

DELETE t1 FROM my_table t1, my_table t2 WHERE t1.id < t2.id AND t1.t_id = t2.t_id AND t1.owner_id = '10';
7:53 pm on July 16, 2016 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member tangor is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Nov 29, 2005
posts:8167
votes: 609


Glad for the happy resolution. All too often it's a non-obvious typo or something like a malformed call that creates these maddening problems. Taking a new look, or trying it again usually solves it... even if we STILL can't pinpoint why it failed in the first instance.
8:57 pm on July 16, 2016 (gmt 0)

New User

joined:Nov 15, 2015
posts: 39
votes: 2


Hi, I am NO SQL expert but I was intrigued by the problem so I thought I'd take a look. It took me a moment to realise your SQL statement references the same table 'my_table'.

I'm just wondering if there's a problem deleting rows from a table that you have inner join'ed to itself? I'm not sure how MySQL handles the self joining in the background but I wonder if it has to rebuild any temporary data structures it may use because of the inner join each time it deletes a record?

Just a thought.
6:25 pm on July 27, 2016 (gmt 0)

Preferred Member from GB 

10+ Year Member Top Contributors Of The Month

joined:July 25, 2005
posts: 402
votes: 16


OK, so I spoke too soon. The issue is back even with the new query. When I tested, I had 50K rows and it worked fine, so I assumed I had solved it. Now the same query on 60K rows times out again.

Can anybody please suggest how to write a query for removing duplicate entries on large tables (millions of rows)?

@jbnz - yeah, I think you're right, it has to rebuild a temporary structure, however I have no idea how to do it more efficiently.
8:41 pm on Sept 19, 2016 (gmt 0)

Senior Member

WebmasterWorld Senior Member topr8 is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Apr 19, 2002
posts:3377
votes: 41


... did you consider the strategy suggested on the MySQL site [dev.mysql.com] ... insert all the rows you are NOT deleting into a new table, then rename the table to the old name.
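A sketch of that rebuild strategy, using the table and column names from the queries earlier in the thread (adjust to the real schema; this keeps the newest row, i.e. the highest id, per t_id, matching what the original DELETE preserved):

```sql
-- Build an empty copy with the same structure and indexes.
CREATE TABLE my_table_new LIKE my_table;

-- Keep only the newest row per t_id for this owner...
INSERT INTO my_table_new
SELECT t.*
FROM my_table t
JOIN (SELECT t_id, MAX(id) AS max_id
        FROM my_table
       WHERE owner_id = '10'
       GROUP BY t_id) keepers
  ON t.t_id = keepers.t_id AND t.id = keepers.max_id
WHERE t.owner_id = '10';

-- ...plus everything belonging to other owners, untouched.
INSERT INTO my_table_new
SELECT * FROM my_table WHERE owner_id <> '10';

-- Atomic swap; the old data survives as my_table_old until dropped.
RENAME TABLE my_table TO my_table_old, my_table_new TO my_table;
```

Because the GROUP BY does one pass with an aggregate instead of a row-by-row self-join delete, this tends to scale much better on tables with millions of rows.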
 
