I do have a primary key on table `post`.
Does 'Rows_examined: 771327' mean the MySQL server read through those 771327 rows just to get 30 rows?
It takes 14 seconds!
Is it possible to show 'Rows_examined' in EXPLAIN or some other command? The slow query log is hard to debug with...
And is it possible to optimize this?
Yes.
771297 + 30 = 771327. I have a table with 634699 rows and ran a similar query:
SELECT * FROM table LIMIT 622122, 30;
The query returned 30 rows in set (0.57 sec). If I run
SELECT * FROM table;
on the same table, it returns 634699 rows in set (8.23 sec). Now, it takes a lot longer than that to display them on my command line interface though ;)
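To see why the big offset is the expensive part: with LIMIT offset, count the engine still has to walk past all `offset` rows before it can return the 30 you asked for, so Rows_examined = offset + count. A minimal sketch of this, using SQLite as a stand-in for MySQL (the table and row counts here are made up for illustration), also shows the usual workaround of seeking by key instead of by offset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO post (body) VALUES (?)",
                 [("row %d" % i,) for i in range(1000)])

# Offset pagination: the engine scans the whole way to the offset,
# even though only 30 rows come back.
offset_rows = conn.execute(
    "SELECT id FROM post ORDER BY id LIMIT 970, 30").fetchall()
plan = " ".join(r[-1] for r in conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM post ORDER BY id LIMIT 970, 30"))

# Keyset ("seek") pagination: jump straight past the last id already seen.
keyset_rows = conn.execute(
    "SELECT id FROM post WHERE id > 970 ORDER BY id LIMIT 30").fetchall()
plan2 = " ".join(r[-1] for r in conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM post WHERE id > 970 "
    "ORDER BY id LIMIT 30"))

# Both return the same 30 rows, but the first plan is a table SCAN
# while the second is a SEARCH on the primary key.
assert offset_rows == keyset_rows
```

The same idea carries over to MySQL: `WHERE id > last_seen_id ... LIMIT 30` examines about 30 rows regardless of how deep into the table you are.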
I found this quote from the Documentation Team out there as well...
The slow query log also contains queries that don't use any index at
all to select rows from a table, which indicates a full scan. That's
probably why you're seeing those queries in the log.
What do you get when you
EXPLAIN SELECT * FROM `post` LIMIT 771297, 30;
id 1
select_type SIMPLE
table post
type ALL
possible_keys NULL
key NULL
key_len NULL
ref NULL
rows {$all_rows_of_post}
Extra
EXPLAIN SELECT * FROM `post` ORDER BY postdate LIMIT 771297, 30
id 1
select_type SIMPLE
table post
type ALL
possible_keys NULL
key NULL
key_len NULL
ref NULL
rows {$all_rows_of_post}
Extra Using filesort
EXPLAIN SELECT * FROM `post`
WHERE forumid=49
ORDER BY postdate LIMIT 771297, 30
id 1
select_type SIMPLE
table post
type ref
possible_keys forumid
key forumid
key_len 2
ref const
rows 18576 <-- note1
Extra Using where; Using filesort
note1: this `post` table is just a backup from many years ago. I ran the same query on the production server and got 10 times the row count above, and it will become 12 times, 15, 20, 50, 100 times, etc.
This number will keep increasing every day, beyond my control.
There should be a way to keep it down.
I don't think it can be optimized at the SQL level.
May we think about the design level?
Design Level thinking:
Could it be possible to do something like:
SELECT * FROM post WHERE page=100 ORDER BY postdate LIMIT 0, 30
It's rather complex, because there's not only the table `post` but also `thread` and many others:
SELECT * FROM thread WHERE ... ORDER BY mtime LIMIT ..
SELECT * FROM thread WHERE ... ORDER BY ctime LIMIT ..
ctime=creation time=first post.postdate
mtime=modify time=last post.postdate
page_of_ctime may be easy, because the first post.postdate will NOT change after the row is inserted into `thread`.
But mtime? Whenever a thread gets a new reply, or a reply is deleted, mtime is updated! Then page_of_mtime has to be updated too, and some other threads will have to do page_of_mtime = page_of_mtime - 1.
e.g.:
suppose there are 100 pages; when one thread on page 50 is replied to, it is "moved" to page 100, and the "last thread" on each of pages 51-100 has to be "moved" with page_of_mtime - 1
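That cascade can be counted with a toy model (an illustration only, not schema code; the 30-threads-per-page and 100-page figures are assumed to match the example above). One reply moves one thread to the last page, and exactly one thread per page behind it slips back a page:

```python
# Toy model of the page_of_mtime cascade: threads ordered oldest-mtime
# first, 30 per page, 3000 threads = 100 pages.
PER_PAGE = 30

def page_of(pos):
    # 0-based position in the mtime ordering -> 1-based page number
    return pos // PER_PAGE + 1

threads = list(range(3000))            # thread ids in mtime order
old_pages = {t: page_of(i) for i, t in enumerate(threads)}

moved = threads.pop(49 * PER_PAGE)     # a thread sitting on page 50 ...
threads.append(moved)                  # ... gets a reply, jumps to page 100

new_pages = {t: page_of(i) for i, t in enumerate(threads)}
changed = [t for t in threads if old_pages[t] != new_pages[t]]
print(len(changed))                    # 51: the moved thread plus one
                                       # boundary thread per page 51-100
```

So every reply costs on the order of one UPDATE per trailing page, which is why a stored page number on a column that changes (mtime) is painful to maintain.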
I wonder how other SQL servers manage to optimize this.
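One common answer is to not store a page number at all: paginate the thread list by remembering the (mtime, id) of the last row on the previous page and seeking from there, so nothing has to be renumbered when mtime changes. A sketch, again using SQLite as a stand-in (the `thread` schema and counts here are invented; MySQL takes the same WHERE shape over an index on (mtime, id)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thread (id INTEGER PRIMARY KEY, mtime INTEGER)")
conn.execute("CREATE INDEX idx_mtime ON thread (mtime, id)")
# 90 threads with ties in mtime, to show why id is needed as a tiebreaker
conn.executemany("INSERT INTO thread (id, mtime) VALUES (?, ?)",
                 [(i, 1000 + i // 3) for i in range(90)])

PER_PAGE = 30

def fetch_page(last=None):
    """`last` is the (mtime, id) of the final row on the previous page."""
    if last is None:
        return conn.execute(
            "SELECT mtime, id FROM thread "
            "ORDER BY mtime DESC, id DESC LIMIT ?", (PER_PAGE,)).fetchall()
    return conn.execute(
        "SELECT mtime, id FROM thread "
        "WHERE mtime < ? OR (mtime = ? AND id < ?) "
        "ORDER BY mtime DESC, id DESC LIMIT ?",
        (last[0], last[0], last[1], PER_PAGE)).fetchall()

page1 = fetch_page()
page2 = fetch_page(last=page1[-1])    # no offset, no page_of_mtime column
assert set(page1).isdisjoint(page2)
```

The trade-off: you can only step to the next/previous page (or jump to known boundaries), not to an arbitrary page number, but each page read examines only ~30 index entries no matter how deep it is.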
type ALL
possible_keys NULL
key NULL
Extra Using filesort
I'm assuming the PRIMARY KEY is on the forumid column. And you do have indexes, correct?
SHOW INDEX FROM posts;
I run queries over some very large tables, very large, and response time is subsecond. If you are sure you have the table(s) optimized, about the only issue I could fathom might be a server setting. Since you are working with some big tables, you might also want to look into Tuning Server Parameters [dev.mysql.com].