|Are We There Yet?|
Data overload. Afraid or unable to decide. Let the machine do it!
Once we intuitively kept count (bigger, smaller, etc.) in our heads. Then came math and accounting systems (stones, seeds/grain, fiat currency, the abacus), tracking systems (sheepskin and ink, parchment and ink, paper and ink, paper and typeset), early analysis tools (slide rules, calculators, etc.), then punch cards and early analog computing - all somewhat tangible and limited in capacity, very analog.
Then came digital and Moore's law and supercomputers and ubiquitous networking.
For years I've been reading about the growing volume of "data" and "information". (Information is not to be confused with data; it's a bit more loosey-goosey, though data too can be collected and interpreted in slanted, selective ways.)
It's both amazing and not so amazing that so much data is being sought by tracking, monitoring, listening, etc. Every day there's a new device or system created to collect, record, analyze and report on data from any and all "changes in the system or status quo" - be that a human body, an enterprise, etc.
We've always been awash in data, though we were not always consciously aware of its presence, its role, or its significance. However, we sure as heck are being made aware of it now. The NSA spies. Analytics is everywhere. Every point of contact, every action becomes an occasion or opportunity for someone to observe and collect more data.
And decisions must be made.
Who is going to risk deciding without looking at THE DATA . . ALL THE DATA . . AND . . make sure we HAVE ALL the data!
So, given the ever growing glut of data I pause to ask a few questions, likely ones that anyone and everyone might want to pin to a board somewhere . . for the next decade.
Are we reaching the point where answers to questions involving "the data" are approachable, but unanswerable, by humans due to the sheer volume of available data?
Are we at the point where machines are making more decisions than humans? Are machines also taking over the role of "writing the decision making code" - to process the data, interpret the data, etc - due to humanity's limited ability to write code complex enough to process the data, etc?
When we rely on computers to crunch and analyze "the data" DO WE KNOW what's going on deep down in the code . . that the machine is "getting it right"?
When there is so much data and so much of decision making is being assigned to software . . how do we know that we know when we no longer are doing the thinking?
With so much data available, will simple questions cease to be respected, meaningful, or even be allowed to exist?
Is there such a thing as too much data producing bad results?
As with most tools, computerised data collection, manipulation, and analysis has both good and not-so-good uses, hysterical and horrendous ones alike.
I remember the first time I watched a 3-D map rendering: it took three days on massive supercomputers to produce the end result. Some months back I created my own on a network of 12 linked Linux boxes (each at least a few years old), and it took 20 minutes.
The NSA 'problem' noted in Mackin_USA's link above shows the folly of collecting data simply 'because one can'. When one is looking for a needle in a haystack, adding more haystacks to the problem isn't a viable solution. I've seen the same erroneous mindset at work in website analysis. Just because the tools allow something doesn't mean that it is worth doing.
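The more-haystacks point can be made with arithmetic. A minimal sketch, using made-up numbers (10 real needles, a detector that catches 99% of them but also misflags 0.1% of the hay): as the hay pile grows, the false flags swamp the real finds, so each flagged item becomes less and less likely to be a genuine needle.

```python
# Illustrative only - the needle counts and detector rates are invented,
# not taken from any real system.
def precision(hay_items, needles=10, hit_rate=0.99, false_rate=0.001):
    """Fraction of flagged items that are genuine needles."""
    true_hits = needles * hit_rate          # needles correctly flagged
    false_hits = hay_items * false_rate     # hay wrongly flagged
    return true_hits / (true_hits + false_hits)

for hay in (10_000, 1_000_000, 100_000_000):
    print(f"{hay:>11,} items of hay -> {precision(hay):.4%} of flags are real needles")
```

With 10,000 items of hay, roughly half the flags are real; with 100 million, almost none are. The detector never got worse - the haystacks just got bigger.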
With so much (and steadily increasing) computing power and speed at our fingertips, and with the plunge in storage and memory costs, it seems so reasonable to simply suck up everything so that if/when one thinks up something to do with it one can go back and take a look. I mean, I have all my log files back to 1997...on thumb and CD and floppy and :) Yes, I really should update some of that storage before the drives stop working...
I read about the software, aka machines, making financial market trades, where being in the same building or block as an exchange gives microseconds of advantage; where it is worthwhile building a dedicated fibre line between Chicago and New York for a 5.2 millisecond advantage...
The underlying problem with automation, be it in financial trades, transit, or what have you, is twofold: (1) the assumptions built into the program, and (2) the errors that exist in any software, which are compounded in complex, dense software. When it works as expected it is truly amazing; when it glitches it can ruin a company or kill hundreds.
Given that actual operation is subject to sputters, freezes, runaway behavior, or unexpected tangents, is it surprising that security, including privacy, is at best an afterthought? And, as with most bolt-ons, doesn't work that well? We are letting the toolmakers set the agenda rather than setting our own goals and using tools selectively to help us get there.
Being human, it is mistakes that drive lessons most firmly home. Presuming we survive hackers, crackers, and organisations run amok, we may actually get a quite wonderful technical foundation easing our life on this world and even getting us to others. On the other hand: (in)advertent financial meltdown, power grid breakdown, systemic records crash... we are a fascinating but fragile computerised information society.
Information. We, as a society, do need to decide what to do with it.
Food for thought:
Does Analytics Make Us Smart or Stupid?: [forbes.com]
|While psychologically too much data reverses our rational behavior, organizationally it produces increasingly rigid rational control. Organizational activity becomes directed by algorithms, characterized less by judgment and more by reaction. |
Will Big Data Replace Strategy?: [outofmygord.com]
|The drawback of data is that it needs to be put into a context to make it useful. |