ALbino - 4:03 am on Sep 2, 2011 (gmt 0)
I'm not much of a SQL guy, so I thought I'd ask you guys here.
I've been handed a huge database with 50,000 rows and 200 columns. Interestingly, after the first couple of columns, the rest are 4 sequential repeats of the same set of fields, for example:
ProductID/Brand/Type/ followed by SmlHeight/SmlWidth/SmlWeight/MedHeight/MedWidth/MedWeight/LrgHeight/LrgWidth/LrgWeight/etc
Imagine that, but in 4 groups of 50 columns, totaling 200. They're all just Height/Width/Weight-style fields broken down into 4 versions of the same thing.
Running speed queries against my 200-column table, it doesn't seem to matter whether there are 50,000 rows or 5,000 rows; they all run at about the same speed. However, if I reduce the columns from 200 to, say, 50, the queries are significantly faster.
So, is there any reason it would be a bad idea to just chop each row up into 4 slightly repeating rows? Even though probably 10 columns (ProductID/Brand/Type/etc.) would be exact duplicates across those 4 rows, the rest would be unique to each row.
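Just to make it concrete, here's roughly what I picture the chopped-up version looking like; the table and column names are made up and the types are guesses:

    CREATE TABLE product_dims (
        ProductID INT NOT NULL,
        Brand     VARCHAR(100),
        Type      VARCHAR(100),
        Size      VARCHAR(10),     -- 'Sml', 'Med', 'Lrg', etc. instead of 4 sets of columns
        Height    DECIMAL(10,2),
        Width     DECIMAL(10,2),
        Weight    DECIMAL(10,2)
        -- ...plus the rest of the repeating fields, once each instead of 4 times
    );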
I guess my question really breaks down to this: which of these is the best option?
A) 50,000 rows with 200 columns
B) 200,000 rows with 50 slightly repeating columns
C) 4 separate tables with 50,000 rows and 50 columns
Simply chopping this database up into parts is probably pretty easy, but anything more than that is certainly beyond my skill. This isn't a large company or a huge website with millions of visitors, so I really just need whatever is the smart, quick choice.
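For what it's worth, the chopping itself I figured would just be a handful of INSERT ... SELECTs off the existing wide table, something like this (again with made-up table names):

    INSERT INTO product_dims (ProductID, Brand, Type, Size, Height, Width, Weight)
    SELECT ProductID, Brand, Type, 'Sml', SmlHeight, SmlWidth, SmlWeight FROM wide_products
    UNION ALL
    SELECT ProductID, Brand, Type, 'Med', MedHeight, MedWidth, MedWeight FROM wide_products
    UNION ALL
    SELECT ProductID, Brand, Type, 'Lrg', LrgHeight, LrgWidth, LrgWeight FROM wide_products;
    -- ...plus one more SELECT for the fourth size group, whatever it's called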
Any help would be appreciated. Thanks!