Currently I am doing the whole LIMIT 0, 30 deal. This works great for about my first 20 pages of results, but it gets very slow for pages 100+. I understand the issue: to fetch my next thirty rows, MySQL has to step through 3000 records first, so it slows down as the offset grows. How can I change the table structure to speed this up?
An index over your table(s) that exactly matches your query can speed things up a lot, but in the end MySQL still has to step through all the records until it reaches numbers 3001..3030. That causes the query time to increase linearly with the page number people are viewing.
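To make that linear cost concrete, here is a sketch; the table and column names (articles, created_at) are made up for illustration, not from your question:

```sql
-- Even with an index matching the ORDER BY, a large OFFSET is slow:
-- MySQL reads and discards the first 3000 index entries every time.
CREATE INDEX idx_articles_created ON articles (created_at);

-- Page 101: scans 3030 rows, returns 30.
SELECT *
FROM articles
ORDER BY created_at
LIMIT 3000, 30;
```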
If the data doesn't change often, you might consider dumping the result of the query into a lookup table and giving each record in that table a unique sequence number starting at 1. With a PRIMARY KEY on that sequence number, you can jump directly to the wanted records with the WHERE clause of a SELECT statement.
For example, for page 101 you can request the records with sequence numbers 3001..3030 from your lookup table, which is just as fast as requesting records 1..30 for page 1.
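A sketch of the idea, again assuming an illustrative source table called articles ordered by created_at:

```sql
-- Build a lookup table whose AUTO_INCREMENT primary key assigns a
-- gapless sequence number 1, 2, 3, ... in the desired display order.
CREATE TABLE article_seq (
    seq        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    article_id INT UNSIGNED NOT NULL
);

INSERT INTO article_seq (article_id)
SELECT id FROM articles ORDER BY created_at;

-- Page 101: a range scan on the PRIMARY KEY, nothing to skip over.
SELECT a.*
FROM article_seq s
JOIN articles a ON a.id = s.article_id
WHERE s.seq BETWEEN 3001 AND 3030
ORDER BY s.seq;
```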
That sounds like exactly what I want to do. I suspected that would be the way to go, but I'm not sure how to implement it. The table never changes unless I change it, so I could rebuild the lookup table whenever I update the main table. I haven't really ever dealt with temporary tables (I've never needed to in a couple of years of web development). Can you give me a basic example of how to do this in pseudocode? I'd like to see how to create the lookup table and how to refresh it without breaking the query while I'm making changes.
I was just reading about temporary tables. It looks like they are dropped as soon as the connection that created them ends. If that is so, I don't think they would speed things up. Would a similar approach with a regular (non-temporary) table get the same results?
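Right, a TEMPORARY table only lives for the duration of the session that created it, so for serving web pages you want a regular table. One common way to refresh it without breaking queries that are running at the same time is to rebuild a shadow copy and swap it in with RENAME TABLE, which MySQL performs atomically. A sketch, reusing the illustrative names from above:

```sql
-- Rebuild into a shadow table while the old one keeps serving reads.
CREATE TABLE article_seq_new LIKE article_seq;

INSERT INTO article_seq_new (article_id)
SELECT id FROM articles ORDER BY created_at;

-- RENAME TABLE swaps both names in one atomic operation, so readers
-- always see either the old table or the new one, never neither.
RENAME TABLE article_seq TO article_seq_old,
             article_seq_new TO article_seq;

DROP TABLE article_seq_old;
```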