
Home / Forums Index / Code, Content, and Presentation / PHP Server Side Scripting
Forum Library, Charter, Moderators: coopster & jatar k

PHP Server Side Scripting Forum

    
How to pass a multidimensional Array into a database?
adder
msg:4630154 - 7:00 pm on Dec 12, 2013 (gmt 0)

Hi,

I've got a multidimensional array and I'm trying to save it in a MySQL database. I suppose I have to use nested foreach statements but it somehow doesn't work.

This is my array structure. The awards section is especially tricky because each person may have a different number of awards.
Array
(
    [0] => Array
        (
            [name] => John Brown
            [id] => 123
            [awards] => Array
                (
                    [0] => none
                )
        )

    [1] => Array
        (
            [name] => Jim White
            [id] => 124
            [awards] => Array
                (
                    [0] => Fellowship
                    [1] => Award for excellence
                )
        )

    [2] => Array
        (
            [name] => Mike Green
            [id] => 125
            [awards] => Array
                (
                    [0] => none
                )
        )
)


This is a bit of code I tried to write to test it:

foreach ($myArray as $numbered => $colleagues) {
    foreach ($colleagues as $colleague => $specs) {
        echo $specs['name'];
        echo "<br>";
        echo $specs['id'];
        echo "<br>";
    }
}


It doesn't really work. It will also need a third foreach for the awards, but I'm not sure how to implement that. Do I need another if/elseif statement to check how many elements the current awards section contains? Any help would be really appreciated.
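For reference, a corrected sketch of that traversal, assuming the structure posted above: the inner loop only needs to walk the awards sub-array, since name and id are scalar fields, not arrays.

```php
<?php
// Sketch: one loop per colleague, one inner loop over that
// colleague's awards sub-array. Data mirrors the dump above.
$myArray = array(
    array('name' => 'John Brown', 'id' => 123, 'awards' => array('none')),
    array('name' => 'Jim White',  'id' => 124,
          'awards' => array('Fellowship', 'Award for excellence')),
);

foreach ($myArray as $colleague) {
    echo $colleague['name'] . "<br>";
    echo $colleague['id'] . "<br>";
    foreach ($colleague['awards'] as $award) {
        echo $award . "<br>";
    }
}
```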

Thanks!

 

swa66
msg:4630166 - 7:28 pm on Dec 12, 2013 (gmt 0)

What's your data model?

I'd model it as a relation, but then, I like my data to be normalized.

omoutop
msg:4630401 - 1:01 pm on Dec 13, 2013 (gmt 0)

Do you need to save each element of the array to specific fields in your database, or just save the array as it is?

In the first scenario you will need multiple foreach() loops.
In the second scenario, you can serialize() or json_encode() it and save the result.
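A minimal sketch of the second approach, whole-array storage: encode once on the way in, decode on the way out. (The actual DB write is omitted; the string would simply go into a TEXT column.)

```php
<?php
// Encode the whole structure to one string for storage, then decode
// it back. json_decode(..., true) returns associative arrays rather
// than stdClass objects, so the round trip restores the original shape.
$myArray = array(
    array('name' => 'John Brown', 'id' => 123, 'awards' => array('none')),
    array('name' => 'Jim White',  'id' => 124,
          'awards' => array('Fellowship', 'Award for excellence')),
);

$encoded = json_encode($myArray);        // store this string in the DB
$decoded = json_decode($encoded, true);  // later: read it back out
```

The trade-off: this is simple and schema-free, but you can't query inside the blob with SQL (e.g. "everyone holding a Fellowship") without decoding every row.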

JD_Toims
msg:4630623 - 10:43 am on Dec 14, 2013 (gmt 0)

How did I know swa66 would like their data normalized? LOL. That said, I like swa66 and think they're great at coding, even if we have different styles in some situations. I'm not going to argue which is better: with PHP and MySQL so many things are possible, and so many things are situational, that it could be an indefinite discussion.



Personally, in your situation, I'd prefer to do something along the lines of "as small as possible" single table storage with correlation on the way in/out -- In your system, I'd likely go with something like:

$main_array = array(
    array(
        'name'   => 'John Brown',
        'id'     => 123,
        'awards' => '0'
    ),
    array(
        'name'   => 'Jim White',
        'id'     => 124,
        'awards' => '1,2'
    ),
    array(
        'name'   => 'Mike Green',
        'id'     => 125,
        'awards' => '0'
    )
);

Then associate the numeric values of awards 0, 1, 2, etc. with "human readable values" on-the-way-in and on-the-way-out, which might even end up normalized, but would be, imo [I haven't tested your situation specifically, so I'm not certain], much easier and faster to work with than multiple tables, due to indexing and number of db requests necessary to get the info stored back out and presented to a visitor.



Your foreach() loop could then look something like this:

foreach ($main_array as $sub => $specs) {
    foreach ($specs as $key => $val) {
        echo $key.' = '.$val.'<br>';
    }
}



Keep in mind a foreach() is usually slower than a for(), so if you want to gain a "blip" a for() is usually better. Also, keep in mind I'm tired, so I could be one loop off [I think I am], but it's time for me to go to sleep, so I'm not going to look at this any more and try to make sure I said what I was thinking the way I thought I wanted to say it :) LOL

swa66
msg:4630709 - 8:24 pm on Dec 14, 2013 (gmt 0)

much easier and faster to work with than multiple tables, due to indexing and number of db requests necessary to get the info stored back out and presented to a visitor.

I think you need to take a deeper look at using joins if you need more than a single request to get your data out of multiple tables in one go. It's pretty easy once you get it.
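To make the join idea concrete, here is a hedged sketch of the normalized layout being described: a colleagues table plus a separate awards table keyed by colleague id. The table and column names are assumptions for illustration, not anything from the thread.

```php
<?php
// One round trip returns every (name, award) pair for a colleague;
// LEFT JOIN keeps colleagues with no awards (award comes back NULL).
$sql = "SELECT c.name, a.award
        FROM colleagues AS c
        LEFT JOIN awards AS a ON c.id = a.colleague_id
        WHERE c.id = ?";

// With PDO (connection setup omitted), usage would look like:
// $stmt = $pdo->prepare($sql);
// $stmt->execute(array(124));
// $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```

The point being debated: this is still a single query and a single request from PHP's side, whatever the engine does internally.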

JD_Toims
msg:4630932 - 10:31 pm on Dec 15, 2013 (gmt 0)

I've used them -- On the "real-world sites" I've tried them on they're not as fast as "SELECT what_I_need FROM big_table USE INDEX (the_single_corresponding_index) WHERE var=value"

And I meant multiple requests as in: even with a JOIN you're requesting from 2 [or more] tables. That means at least two of each of the following: indexes to find/read, related var=val conditions to evaluate, pointers to find within the indexes, storage locations for the info, "different parts of the disk to hit/read" to get the info out, etc.

If you get to three or four JOINs, you're tripling or quadrupling the time and "effort" to find and fetch the information, compared to a single index pointing to where the info is on the disk.

It's just not as fast, in the real-world testing I've done, to use JOINs. Maybe MySQL has changed something since I tested, but relational storage, even using JOINs, has always been slower for me than a "big table" design like Google originally used. I don't quite have the processing power of Google [lol], so I have to maximize everything I can wrt speed when speed is a factor.

johnhh
msg:4630954 - 12:36 am on Dec 16, 2013 (gmt 0)

@JD_Toims
From my viewpoint you are correct, especially when it comes to putting data up on the screen as fast as possible: too many table reads slow that down. The trouble is, "normalising" is what they are taught in class, especially at university level, forgetting that that's basically theory; the real world is different.
However, it does depend on the number of columns in the table.
No wish to get into an argument on this, just the result of years of experience.

JD_Toims
msg:4630956 - 12:50 am on Dec 16, 2013 (gmt 0)

Thanks johnhh -- I agree about not wanting to get into an argument about it, so I'll just leave it with:

If someone's storing "critical data", such as in a bank type setting, then by all means, forget about speed gains and normalize absolutely everything.

For most websites though that's not necessary, and since even 0.2 to 0.4 seconds can make a difference in visitor use/satisfaction* I "err on the side of flat-out speed", which, again from my testing, means throwing out the normalization and going with a "big table" design almost every time.

* For a source on the importance of speed, see Google's test results from a few years ago, when they added delays in showing results to their visitors to see if page load speed affects visitor behavior and satisfaction. I don't have time to track it down right now, but IIRC 0.2 seconds made a noticeable difference in use and 0.4 seconds made a fairly large difference.

swa66
msg:4631133 - 2:26 pm on Dec 16, 2013 (gmt 0)

If you seek speed: cache your page as static html, you'll gain much more by not dynamically generating anything than the difference a join or two will make.
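A minimal sketch of that kind of full-page caching; the cache path, TTL, and the "expensive" page body are all made up for illustration.

```php
<?php
// Full-page cache sketch: serve a saved static copy when it's fresh,
// otherwise build the page once and save the HTML for the next hit.
$uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '/';
$cache_file = sys_get_temp_dir() . '/page_' . md5($uri) . '.html';
$ttl = 300; // seconds a cached copy stays valid

if (is_file($cache_file) && (time() - filemtime($cache_file)) < $ttl) {
    readfile($cache_file);  // static copy: no page build, no DB at all
    exit;
}

ob_start();                 // otherwise build the page as usual...
echo "<html><body>expensively generated page</body></html>";
$html = ob_get_flush();     // ...send it to the client, keep a copy
file_put_contents($cache_file, $html);
```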

Denormalizing is an option for solving a real performance issue (not the only option), but I'd never consider it unless it is in fact a real problem: if you never had the data normalized you've no idea if it is a problem or not, and even then only if it can't be fixed otherwise.

MySQL nowadays is very efficient at doing joins, in my measurements. In fact, any proper relational database is built to perform them efficiently.

But unless you look at the query plan in question you can't know anything.

adder
msg:4631148 - 3:29 pm on Dec 16, 2013 (gmt 0)

Whoa, thanks guys for the detailed input. Sorry I neglected this thread, but it has been a crazy weekend :)

To answer the question about the data model, it would definitely be a single table. Ideally, I'd like to end up with something like this:



| id  | name       | award1     | award2              | award3 |
| 123 | John Brown | NULL       | NULL                | NULL   |
| 124 | Jim White  | Fellowship | Award of Excellence | NULL   |
| 125 | Mike Green | NULL       | NULL                | NULL   |



I think the biggest challenge is the awards because some will have none, others will have one or two while a few will have all three.
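One way to handle the variable count with that fixed three-column layout is to pad each awards list out to exactly three entries, treating "none" as an empty list so it becomes NULLs in the row. This is a sketch against the structure posted earlier; the row shape matches the table above.

```php
<?php
// Pad each colleague's awards out to exactly 3 slots so every row
// fits the (id, name, award1, award2, award3) layout. 'none' and
// missing slots become NULLs.
$colleagues = array(
    array('name' => 'John Brown', 'id' => 123, 'awards' => array('none')),
    array('name' => 'Jim White',  'id' => 124,
          'awards' => array('Fellowship', 'Award of Excellence')),
);

$rows = array();
foreach ($colleagues as $c) {
    $awards = ($c['awards'] === array('none')) ? array() : $c['awards'];
    $awards = array_pad($awards, 3, null);  // fill up to award1..award3
    $rows[] = array($c['id'], $c['name'],
                    $awards[0], $awards[1], $awards[2]);
}
```

Each entry of $rows can then be bound straight into a 5-column INSERT, with the NULLs landing in the unused award columns.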

JD_Toims
msg:4631286 - 9:10 pm on Dec 16, 2013 (gmt 0)

Denormalizing is an option for solving a real performance issue (not the only option), but I'd never consider it unless it is in fact a real problem: if you never had the data normalized you've no idea if it is a problem or not, and even then only if it can't be fixed otherwise.

I had one of the DBs I tested on at 3NF ;)

[I will say it was quite some time ago, so MySQL may have (likely has) improved in the interim, but I won't take the time to go back and renormalize: it's worked fine with a single-table design for enough years that I'd rather spend my time on more productive things than fixing what isn't broken. I'm going to bow out of the storage discussion, unless there's another specific question about it, because it's really a matter of style, imo. Google ran on a "big table" design until the Caffeine update, and WebmasterWorld still runs on a single flat-file DB, so there's some precedent for the type of storage I use.]



I think the biggest challenge is the awards because some will have none, others will have one or two while a few will have all three.

Personally [this'll make swa66 scream -- lol] I'd store them in a single col as a numerical csv list, e.g. 0 for none; 1,2 for award one and award two; 2,4 for awards two and four; etc., because then you don't have to update the DB table to add another award.

The way you have the DB laid out, you'd have to add a new col for every new award, but if you put them all in the awards col as a csv list, you can add or delete the awards available by simply changing what's in the col and the array being associated with the numerical values in the col.

# Note: this only applies if you don't need to select everyone with, say, award 3. If you want to do that, you'd have to use separate cols to be efficient, but I'd probably keep a "total awards" csv-list col too, so I could grab that rather than needing all the individual cols for "all awards". I know, I know, it's not normalized that way, because I'm storing the same info twice :( [lol]

Then once the data is in PHP it's simple to use an array with the numbers from the DB csv list as keys.

Something like:

$awards = array();
$awards[] = 'None';
$awards[] = '1st Award';
$awards[] = '2nd Award';
$awards[] = '3rd Award';
$awards[] = '4th Award';

# Get the data from the DB here.

$associate_awards = explode(',', $the_db_award_numbers);
$count = count($associate_awards);

$awards_held = array();
for ($i = 0; $i < $count; $i++) {
    $awards_held[] = $awards[$associate_awards[$i]];
}

# I'll let you figure out how to format whatever from there.

print_r($awards_held);

© Webmaster World 1996-2014 all rights reserved