|brotherhood of LAN|
| 6:36 pm on Dec 14, 2010 (gmt 0)|
I've used it and not seen any kind of performance hit, it's a handy function. Just ensure the separator used doesn't exist somewhere in the data.
Worth noting this from the manual [dev.mysql.com] when using it
|The result is truncated to the maximum length that is given by the group_concat_max_len system variable, which has a default value of 1024. The value can be set higher, although the effective maximum length of the return value is constrained by the value of max_allowed_packet. The syntax to change the value of group_concat_max_len at runtime is as follows, where val is an unsigned integer: |
SET [GLOBAL | SESSION] group_concat_max_len = val;
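For reference, a minimal sketch of typical group_concat usage (the region_cities table and its columns are hypothetical):

```sql
-- Hypothetical schema: one row per (region, city) pair.
-- Collapse each region's cities into one comma-separated string,
-- using an explicit SEPARATOR known not to occur in the data.
SELECT region,
       GROUP_CONCAT(city ORDER BY city SEPARATOR ', ') AS cities
FROM   region_cities
GROUP  BY region;
```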
| 8:33 pm on Dec 14, 2010 (gmt 0)|
well, it does seem to be very CPU-intensive,
at least on my end, where I have thousands of rows..
I'm leaning towards building a query without this function and then grouping the regions in PHP.. but that's not the best solution either.
how do you guys handle this issue?
maybe indeed use the group_concat function, but only for the resultset that is currently displayed (active page when paginated)..
what's the general rule of thumb in this case?
| 2:31 pm on Dec 15, 2010 (gmt 0)|
I use group_concat in my indexer (sphinx search) and on occasion in some "tabbed" output. And BOL is correct, watch out for that little truncation gotcha! I run a pre-query on sites when I don't have control over the MySQL conf:
$sql = "SET SESSION group_concat_max_len = $intValue";
... where $intValue is large enough to handle my expected results without going over max_allowed_packet. A handy way to use group_concat in your query and avoid the "separator" issue to which BOL referred is to specify NULL as the separator in your string [dev.mysql.com]:
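For illustration, the SEPARATOR clause looks like this (table and column names are made up):

```sql
-- The default separator is a comma; SEPARATOR overrides it, so you can
-- pick a delimiter guaranteed not to appear in the concatenated values.
SELECT blog_id,
       GROUP_CONCAT(tag SEPARATOR '|') AS tags
FROM   post_tags
GROUP  BY blog_id;
```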
However, I don't think any of this has to do with your performance issue, or at least it's not likely. I do see another issue, though, that you may need to investigate. It's called the "single-value rule", and perhaps you need to start there? Just a suggestion.
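A quick sketch of what the single-value rule means (hypothetical table; under the ONLY_FULL_GROUP_BY SQL mode the first query is rejected outright):

```sql
-- Violates the rule: `city` is neither in the GROUP BY nor aggregated,
-- so MySQL returns an arbitrary city per region.
SELECT region, city
FROM   region_cities
GROUP  BY region;

-- Conforms: the non-grouped column is wrapped in an aggregate.
SELECT region, GROUP_CONCAT(city) AS cities
FROM   region_cities
GROUP  BY region;
```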
| 7:30 pm on Dec 15, 2010 (gmt 0)|
You're right guys. I mean the group_concta unction is doing pretty well in terms of performance. It's the Zend Framework's PDO that is consuming lost of CPU.. I think I should switch to CodeIgniter..
BTW, I have another "simple" problem with an sql query:
select r from regions where active_toggle = 1
I set an index on the active_toggle field. It's tinyint(1).
when I run it with EXPLAIN:
id  select_type  table   type  possible_keys      key   key_len  ref   rows  Extra
1   SIMPLE       region  ALL   active_toggle_idx  NULL  NULL     NULL  45    Using where
why is it not using the index?
|brotherhood of LAN|
| 8:10 pm on Dec 15, 2010 (gmt 0)|
This may be pertinent; does the resultset returned amount to a large % of the table?
|Sometimes MySQL does not use an index, even if one is available. One circumstance under which this occurs is when the optimizer estimates that using the index would require MySQL to access a very large percentage of the rows in the table. (In this case, a table scan is likely to be much faster because it requires fewer seeks.) However, if such a query uses LIMIT to retrieve only some of the rows, MySQL uses an index anyway, because it can much more quickly find the few rows to return in the result. |
You may want to try/benchmark using the USE INDEX [dev.mysql.com] syntax and see if it is faster in your case:
select r from regions use index (active_toggle_idx) where active_toggle = 1
| 7:26 am on Dec 16, 2010 (gmt 0)|
yes, this is it.
actually all regions are active (active_toggle=1 for all records).
is this ok? those queries are marked as slow ones..
not sure what to do in this case. tried to clean up my queries..
|brotherhood of LAN|
| 8:13 pm on Dec 16, 2010 (gmt 0)|
Dropping the index wouldn't do any harm, and since you say all rows match the criteria, you can drop the WHERE clause too.
Also check out memory tables [dev.mysql.com]
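A rough sketch of what that could look like for the regions table above (the `r` column is taken from the earlier query; the `id` column and sizes are guesses):

```sql
-- MEMORY tables live entirely in RAM: very fast reads, but the contents
-- are lost on server restart and BLOB/TEXT columns are not supported,
-- so repopulate the table at startup.
CREATE TABLE regions_mem (
    id  INT UNSIGNED NOT NULL PRIMARY KEY,
    r   VARCHAR(64)  NOT NULL
) ENGINE = MEMORY;

INSERT INTO regions_mem (id, r)
SELECT id, r FROM regions;
```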
| 9:50 am on Dec 18, 2010 (gmt 0)|
Thanks. Removed the index as you suggested. Although those queries are still being logged as slow ones (not-using-indexes), I'll have to trust you that it's fine.
I have another db question. I have a huge table which holds over a million records (blog posts for various blogs). I need only the last 10 for every blog.
Is it better to create a view or create another table and copy only the fresh entries?
how would you do that?