Accuracy of AI Summaries: What Have You Discovered?
Wrong and false data, even non-existent stuff.
I've been doing editorial work for clients on projects of varying natures. Some topics I already know, but sometimes I need to research in order to provide a comprehensive background for them to understand, and this also requires reputable sources, because some pieces of data may (or will) spark debate with other authorities. Some topics are new to me, but I know the process: find sources with a good reputation (at least three of them), read, check, compare, and even vet those sources to find out who they are and what they do.
During this process, I've found material in Google's AI summaries that was totally made up.
Since most of my work happens in my native language (Spanish), it's funny to see made-up words, or translations of English concepts and words, that don't exist in Spanish. I can't recall it right now, but I came across a word that doesn't exist being presented as valid, with its meaning described and its source given as the RAE (Real Academia Española). That was fun: when you actually checked the cited sources, there was absolutely no link to the RAE.
I happen to know my own language (Spanish) far beyond the average user, writer, or reader, so I find this both funny and shocking. It's a problem, because many MFA websites have been created with AI-generated content, and then students use those "word meaning" websites as sources to back up their work (even university teachers have done this), which propagates FALSE information.
The problem is people telling you it's true because it appeared on Google.
Go figure.