It was great while it lasted

         

Sgt_Kickaxe

5:42 pm on Dec 8, 2022 (gmt 0)



Person using ChatGPT: "Write a Haiku from the perspective of a copywriter who is feeling sad that AI might diminish the value of the written word"

ChatGPT:

Words on a screen,
Once valued, now just a blur.
Machine takes the pen.

tangor

3:16 am on Dec 9, 2022 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Seems short a syllable. Not quite perfection.

"seventeen syllables, in three lines of five, seven, and five, traditionally evoking images of the natural world"

Sgt_Kickaxe

3:18 pm on Dec 9, 2022 (gmt 0)



Google's last update contains words suggesting they are adapting to a changing internet. A non-human generated internet.

AI won't stop learning when it perfects its Haiku. Webmasters, SEOs and content marketers are seeing the sunset right now, my friend.

What we do, a machine will do better, in milliseconds (Try GPT-3 daVinci in non-English!).

If you want a forward-looking niche idea, try entertaining people. We're becoming the Jester... lol. I jest, of course, but you'd best start reinventing yourself today if you wish to remain relevant online. AI was an infant last year; now it's finishing high school. You will own nothing, indeed.

Sgt_Kickaxe

3:58 am on Dec 10, 2022 (gmt 0)



If I'm being honest, I feel bad for all of the webmasters creating articles with ChatGPT. Judging by all the suggestions from the new ChatGPT gurus on YouTube, they are all missing one critical point.

ChatGPT can only write about what's already out there: 100% existing content, reworded over and over. Remember the "more" Google wanted? Yeah, ChatGPT can't do any original research. It's good with the basics, not with the stuff that moves the ranking needle.

I'll tell you what it is good at: Twitter! "Write a tweet about..." and boom, complete with hashtag.

Lol. Who's the bot now?!

ronin

12:16 pm on Dec 10, 2022 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I've been intrigued by the concept of GitHub Copilot (based on OpenAI GPT) since before it launched publicly in June this year.

My curiosity does not extend to $100 a year.

Fortunately, with the Research Preview interface to ChatGPT currently accessible free and without signup, it does not have to.

I asked ChatGPT:

Can you write javascript?


ChatGPT responded:

Yes, I can write JavaScript. Here is a simple example of a function in JavaScript that takes a string as input and returns the string with the first letter capitalized:


function capitalize(str) {
  return str.charAt(0).toUpperCase() + str.slice(1);
}


This function can be called like this:


capitalize("hello world"); // returns "Hello world"


I can write more complex JavaScript programs as well, depending on your needs. Let me know if you have any specific tasks you would like me to help with.

ronin

9:14 pm on Dec 13, 2022 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



This guy thinks that ChatGPT's code-generating abilities actually surpass those of GitHub Copilot:

OpenAI's ChatGPT is a MASSIVE step forward in Generative AI
[youtube.com...]

robzilla

10:15 pm on Dec 13, 2022 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



but I'd start reinventing yourself today if you wish to remain relevant online

It's a bit frightening, isn't it? If this is possible now, what can AI do in 5, 10 or 20 years? "Make a website for my yoga studio with a page about X, one about Y," etc. And the amount of autogenerated content is going to EXPLODE. From a user perspective, if the content is good enough it might not matter; for webmasters, it spells trouble. I try not to think too much about it because it makes me sad and pessimistic, but we'll just have to adapt.

Martin Potter

2:10 am on Dec 14, 2022 (gmt 0)

5+ Year Member Top Contributors Of The Month



Frightening, indeed.

I propose a required HTTP declaration, or something better, for each published web page that specifies whether all of the content on the page in question was written by a real human being, or whether any of it was "written" by a non-human, bot, or so-called AI.

In this way, smart people could filter their search results for pages more likely to have originated from another person, instead of the regurgitative compositions of non-humans. Certain sectors of Big Tech would not like this, of course.

ronin

9:40 am on Dec 14, 2022 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I agree there is merit in the idea that text which is AI generated could be marked up accordingly.

We already mark up languages:


<p>By contrast, it would be entirely uncontroversial to declare:</p>
<p lang="fr">Ceci est un paragraphe.</p>


So, something like:


<p>There are plenty of less well-known trivia to discover about Leonardo da Vinci.</p>

<p>For instance, two minutes ago, I discovered:</p>

<blockquote ai-source="https://chat.openai.com/chat" ai-source-prompt="Tell me something unusual about Leonardo da Vinci in twenty words or fewer.">Leonardo da Vinci was a left-handed, gay, vegetarian who was dyslexic.</blockquote>
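(As an aside, invented attributes like ai-source aren't conforming HTML; the spec's data-* mechanism is the standard escape hatch for author-defined metadata, so a valid-HTML variant of the same sketch might look like:)


<blockquote
  data-ai-source="https://chat.openai.com/chat"
  data-ai-source-prompt="Tell me something unusual about Leonardo da Vinci in twenty words or fewer.">
  Leonardo da Vinci was a left-handed, gay, vegetarian who was dyslexic.
</blockquote>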


But this - like many things relating to markup on the web - relies on information being marked up with care and attention.

If you can't get people to add alt text to their images, I rather doubt those same people are going to take the time to indicate that their information is sourced from an AI oracle which pronounces both truth and untruth with the same degree of apparent conviction.

Sgt_Kickaxe

7:31 pm on Dec 15, 2022 (gmt 0)



We're in the 4th industrial revolution now, remember?

Martin Potter

1:29 am on Dec 16, 2022 (gmt 0)

5+ Year Member Top Contributors Of The Month



If you can't get people to add alt text to their images ...

Good point. Now I see no hope in the direction that I suggested. Great while it lasted, but all downhill from here.

engine

9:31 am on Dec 16, 2022 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



I've often thought this could replace a search engine, assuming it'll crawl the Net.

robzilla

9:39 am on Dec 16, 2022 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I've often thought this could replace a search engine, assuming it'll crawl the Net.

Wouldn't it be required to cite its sources somehow? Apart from common knowledge, of course. And then you'd basically have Google.

ronin

2:59 pm on Dec 16, 2022 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I'd agree that citing sources (or providing supporting references, at the very least) is imperative.

But Generative AI is not Google / Bing / Brave etc. (though I wouldn't be surprised for a moment if they sought to harness the technology).

Generative AI (arguably) has more in common with a text-based search engine than a text-based search engine has with any of the other technologies (Wikipedia, RSS indexing etc.).

But... I'd definitely want to contend that Generative AI represents a distinct eighth information indexing / searching / retrieving technology, alongside:

- Text Search
- Image Search

- Map-based / Geo / Location Search
- Structured Semantic Metadata Indexing (Schema.org, GoodRelations etc.)
- RSS / XML Indexing

- Wiki
- Social Media

engine

3:06 pm on Dec 16, 2022 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



My thinking is that it would have a crawler, and would provide the sources. Knowledge panels, after all, are content, and scraped content at that, just the sort of thing ChatGPT and others could produce. ChatGPT's content comes from elsewhere; it's stringing that content together to make it coherent.

ronin

9:51 pm on Dec 16, 2022 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Now I see no hope


There's plenty of hope. In another thread, @Sgt_Kickaxe points to a Dec 12th article by @Martinibuster:

ChatGPT For Content and SEO?
[searchenginejournal.com...]

See point 6 in the article:

6. Invisible Watermarking
Of more interest is that OpenAI researchers have developed cryptographic watermarking that will aid in detection of content created through an OpenAI product like ChatGPT. [...] The watermarking that the researcher developed is based on a cryptography. Anyone that has the key can test a document to see if it has the digital watermark that shows it is generated by an AI.


Depending on how sophisticated this kind of cryptographic watermarking approach gets, we won't need to rely on anyone explicitly declaring that their next passage is AI-generated. The passage itself will tell us. It will have grubby AI fingerprints all over it.

P.S. This is the other thread: [webmasterworld.com...]

ronin

10:19 pm on Jan 8, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



It still feels utterly remarkable to me whenever there's a question in response to which other information-searching methods simply can't compete with ChatGPT.

Yesterday, I wanted to find out which HTML attributes take a value which represents the id of another element in the same document.

Obviously there's

  • href="#my-target"

    and I was also aware of:

  • for="my-target"
  • form="my-target"

    but I figured there must be more attributes and I wanted a complete list.

    I tried a good handful of queries in Google:

    list of html attributes where the value is an id of another element

    list of html attributes where the value is an id of another element, like form for

    list of html attributes which reference an id, like form for


    etc.

    Zero luck. Just... nothing. Maybe I worded all seven or eight queries badly?

    So, I figured I had nothing to lose by asking ChatGPT:

    Some HTML attributes like for and form have a value which corresponds with the id value of another element. Can you write a complete list of all the HTML attributes like this?


    And ChatGPT responded:

    Here are all the HTML attributes that have a value that corresponds with the id value of another element:

    for: The for attribute is used to specify which form element a label is bound to. The value of the for attribute must be the id of a form element in the same document.

    form: The form attribute specifies the form the element belongs to. The value of the form attribute must be the id of a form element in the same document.

    aria-activedescendant: The aria-activedescendant attribute is used to identify the active descendant element of a composite widget in the application. The value of the aria-activedescendant attribute is the id of an element in the same document.

    aria-controls: The aria-controls attribute is used to identify the elements controlled by the current element in the application. The value of the aria-controls attribute is a space-separated list of element ids.

    aria-describedby: The aria-describedby attribute is used to identify the elements that describe the object. The value of the aria-describedby attribute is a space-separated list of element ids.

    aria-flowto: The aria-flowto attribute is used to identify the element that flows to the object. The value of the aria-flowto attribute is the id of an element in the same document.

    aria-labelledby: The aria-labelledby attribute is used to identify the elements that label the object. The value of the aria-labelledby attribute is a space-separated list of element ids.

    aria-owns: The aria-owns attribute is used to identify the elements owned by the current element in the application. The value of the aria-owns attribute is a space-separated list of element ids.

    I hope this helps! Let me know if you have any other questions.


    So I asked:

    Thank you. Are there any other attributes which should be included in the list above?


    And ChatGPT responded:


    The list I provided includes all of the HTML attributes that have a value that corresponds with the id value of another element. These attributes are used to create relationships between elements in an HTML document, and the value of the attribute refers to the id of another element in the document.

    Here is a summary of the attributes I listed:

    for: Associates a label with a form element.
    form: Associates an element with a form.
    aria-activedescendant: Identifies the active descendant element of a composite widget.
    aria-controls: Identifies elements controlled by the current element.
    aria-describedby: Identifies elements that describe the object.
    aria-flowto: Identifies the element that flows to the object.
    aria-labelledby: Identifies elements that label the object.
    aria-owns: Identifies elements owned by the current element.


    I am nothing short of phenomenally impressed.
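For anyone wanting to put that list to work, here's a quick sketch (my own helper, not part of ChatGPT's answer) that checks such id-reference attributes against the ids actually present in a document:

```javascript
// Sketch: given attribute/value pairs scraped from a page, report id
// references that point at no existing element. The four aria-*
// attributes below take a space-separated list of ids (per ChatGPT's
// list above); the others take a single id.
const MULTI_ID = new Set(['aria-controls', 'aria-describedby', 'aria-labelledby', 'aria-owns']);

function danglingRefs(refs, idsPresent) {
  const missing = [];
  for (const { attr, value } of refs) {
    const ids = MULTI_ID.has(attr) ? value.trim().split(/\s+/) : [value];
    for (const id of ids) {
      if (!idsPresent.has(id)) missing.push({ attr, id });
    }
  }
  return missing;
}
```

For example, danglingRefs([{ attr: 'aria-describedby', value: 'hint err' }], new Set(['hint'])) reports that the id "err" is referenced but never defined.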
    Sgt_Kickaxe

    12:22 am on Jan 9, 2023 (gmt 0)



    I am nothing short of phenomenally impressed.

    Ditto, but what's probably scary/embarrassing for Google is that they had just announced how awesome their new algo was a few months earlier.

    In the video announcement they declared that search could now understand complex questions.

    The example query Google gave, I believe, was along the lines of "I climbed mountain A last fall using Acme brand boots. I plan to climb mountain B next spring, should I get better boots?"

    This means Google would need to understand different mountains, different times of year and different footwear. Yes, that was impressive, but ChatGPT can do that and throw in a whole lot more layers of information, too.

    Also scary for Google, all Google can return is a list of sites, an image carousel, maybe a featured snippet, but not a direct detailed answer, yet. Google's latest advancements put them in the minor leagues once again. I'm sure they could instantly catch up with their own chatbot, but not collect tens of billions of dollars per quarter with it.

    Ironically, both are behind what you'd expect from an actual climber who has experience on both mountains... and it's unlikely THAT person wants to create a website for Google to link to given how PIA Google has made website ranking. Who has time?

    Information delivery has changed forever now that chatGPT is out. Search is about to do the same to keep up, IMO.

    It's almost enough to get a feller, who has been around since before Google, excited again. Almost. Many informational sites are about to become ghost sites with little traffic, sure, but it's great for helping people learn. Humanity needs that more right now, IMO.

    Traditional search with SERPs is going to become less important moving forward, the next "thing" is definitely here.

    What REALLY interests me now is seeing how the ad bucks move over to the new information type. I'm not sure they will given what else is happening. In that case a boot rental shop at the foot of the mountain would be required. Boots you use once a year could be used daily, you'll own nothing.

    Good for the environment, good to know you don't need to worry about choosing boots to be able to climb, but not good at all for driving innovation, creating jobs. It would even make SEO, marketing and most other online "jobs" obsolete.

    I DO wish these people had thought it through and asked "should we" BEFORE they did this... but hey. Good luck!

    ronin

    11:24 am on Jan 9, 2023 (gmt 0)

    WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



    Aside:

    but not good at all for [...] creating jobs. It would even make SEO, marketing and most other online "jobs" obsolete.


    I was under the impression we've been trying to make jobs obsolete since we succeeded in domesticating horses, 6000 years ago. I'm sure we could point to other examples even further back. I thought that was sort of the whole point of quite a lot of what we've been doing for the last six millennia.

    =====

    Separately, I agree with you that AI Chatbots represent a new paradigm but I'm not persuaded (yet) that this is like CDs comprehensively replacing cassette tapes or Netflix comprehensively replacing Blockbuster.

    It might be something more like:

    - there are information searches where you turn to Google
    - there are information searches where you don't bother with Google and go straight to Wikipedia
    - there are information searches where you don't bother with Google or Wikipedia and go to ChatGPT instead

    I'll need to give it some thought to articulate what makes me look something up on Google and what makes me look something up on Wikipedia instead - but I know that I know intuitively which oracle is better-suited to what kinds of information search.

    I suspect that AI Chat will be a third type of oracle which, if anything, helps define what search engines like Google, Bing, Brave etc. are for and what they are not for.

    Sgt_Kickaxe

    3:56 pm on Jan 9, 2023 (gmt 0)



    I was under the impression we've been trying to make jobs obsolete since we succeeded in domesticating horses, 6000 years ago.

    People tamed horses to make travel and some jobs easier, not to make a job obsolete. A person on a horse was more productive and mobile, not replaced etc.

    Hey, maybe we can ride the robots and not be replaced! j/k

    Separately, I agree with you that AI Chatbots represent a new paradigm but I'm not persuaded (yet) that this is like CDs comprehensively replacing cassette tapes or Netflix comprehensively replacing Blockbuster.

    Possible, time will tell, but I don't think it's JUST another option though. It has the potential to take away Google's dominance by making their search results less important, enough that it spooked Google into calling a "code-red".

    [edited by: Sgt_Kickaxe at 4:22 pm (utc) on Jan 9, 2023]

    robzilla

    7:59 pm on Jan 9, 2023 (gmt 0)

    WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



    I am nothing short of phenomenally impressed

    It's impressive, for sure, but how do you know the answer is correct, or complete, or up-to-date? And for many questions, there's not one definitive answer. It would be dangerous for a ChatGPT-like service, i.e. not an experimental toy like ChatGPT itself, to hand out information that is purportedly the truth.

    The advantage of a search engine like Google's is that you know where the information is coming from, and that it will answer not only informational queries but also shopping and, well, just about every query imaginable. It's a single point of entry. I never go straight to Wikipedia, if I want the wiki on a topic I enter "(topic) wiki" (or even without the "wiki" for many queries, knowing it will surface) into the omnipresent search box and Google will direct me. I don't see any ChatGPT-like service replacing that any time soon.

    all Google can return is a list of sites, an image carousel, maybe a featured snippet, but not a direct detailed answer, yet.

    I don't think it's so much a matter of ability, and I wouldn't underestimate how much headway Google's made in the field of AI; they're likely among the forerunners. Just because you can provide a direct answer doesn't mean that you should, or are willing to accept that responsibility.

    Google, Meta and other tech giants have been reluctant to release generative technologies to the wider public because these systems often produce toxic content, including misinformation, hate speech and images that are biased against women and people of color. But newer, smaller companies like OpenAI — less concerned with protecting an established corporate brand — have been more willing to get the technology out publicly.

    [nytimes.com...]

    ronin

    11:22 pm on Jan 9, 2023 (gmt 0)

    WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



    It's impressive, for sure, but how do you know the answer is correct, or complete, or up-to-date?


    You absolutely don't. Hence my second question: Are there any other attributes which should be included in the list above? - which isn't close to a failsafe, but I was prompting ChatGPT to correct itself, if it could.

    Your question (which we should all be asking, all the time, of all information sources) is a good argument for ChatGPT and similar programs to give sources to the best of their ability. Wikipedia, after all, gives sources (albeit the process is vulnerable to manipulation, see: [xkcd.com...] ). And Google, arguably, gives nothing but sources.

    And for many questions, there's not one definitive answer. It would be dangerous for a ChatGPT-like service, i.e. not an experimental toy like ChatGPT itself, to hand out information that is purportedly the truth.


    Yes, it would.

    Sgt_Kickaxe

    11:25 pm on Jan 9, 2023 (gmt 0)



    I wouldn't underestimate how much headway Google's made in the field of AI


    Oh I don't, and didn't. They could release a chatbot of their own tomorrow. With the code-red being called, and employees pulled off their other projects to focus on ChatGPT concerns, I think it's Google who overestimated how long they had to reach this stage. I suspect they haven't found a way to keep making tens of billions of dollars per quarter if old-style SERPs no longer dominate search.

    I think Google's known of the monetization issues for some time, monetizing videos has been difficult, monetizing plain text the way chatGPT provides it would be more difficult still.

    Prediction: a chatbot will be designed to return its response in a full-page format that looks just like a webpage, but is entirely AI content. It's the most likely scenario, given that ads could be put on it and there is a ready platform to draw them from.

    Problem #1 - if people do that it's spam, so would Google do it? If you make your web pages look like search results with nothing but ads above the fold that's spam too, but Google does it.

    Problem #2 - Google doesn't create content. These types of web-page chatbot responses would be drawn from information taken from copyrighted sources, but would send those sources no traffic. The symbiotic relationship between search and content producers would become fully parasitic.

    Google definitely has a code-red to figure out, it will be interesting to see which route they take over the next couple of years.

    Google, Meta and other tech giants have been reluctant to release generative technologies to the wider public because these systems often produce toxic content, including misinformation, hate speech and images that are biased against women and people of color.


    Like you described yourself having, people have a sense of what they can trust, how far they can trust it, and what keeps misleading them. That article applies to all information, including the source of the article itself.

    It's a very old argument. At one point books were called dangerous and routinely burned by people who didn't want others reading them. We got past that; let's not bring it back. Let the police do the policing and enjoy the new tech options, IMO.

    robzilla

    12:18 am on Jan 10, 2023 (gmt 0)

    WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



    Like you described yourself having, people have a sense on what they can trust, how far they can trust it, and what keeps misleading them. That article applies to all information, including the source of the article itself.

    A sense of trust is not some magical thing, it comes from signals of trust. Currently ChatGPT does not signal any trust at all, it's a computer spitting out processed information drawn from unknown sources. The article I referenced is written by two named and experienced journalists, for one of the leading US newspapers, and is sprinkled with references to similarly verifiable sources and relevant articles.

    It's a very old argument. At one point books were called dangerous and routinely burned by people who didn't want others reading them. We got past that, lets not bring it back. Let the police do the policing and enjoy the new tech options, IMO.

    Their reluctance to release such generative technologies because they are prone to producing toxic content is, in your opinion, comparable to the burning of books? But you do want the content to be policed, so their reluctance is justified? I can't get a grasp of your argument here.

    Sgt_Kickaxe

    2:12 am on Jan 10, 2023 (gmt 0)



    Their reluctance to release such generative technologies because they are prone to producing toxic content is, in your opinion, comparable to the burning of books? But you do want the content to be policed, so their reluctance is justified? I can't get a grasp of your argument here.


    *Sigh* - yes, POLICED BY ACTUAL LAW ENFORCEMENT TASKED WITH ENFORCING ACTUAL LAW. I can't state it more simply than that.

    Of course I want ChatGPT to be held accountable if they break any law, or if your fears about what *MIGHT* happen do happen and lead to lawlessness that needs enforcing. ChatGPT doesn't have to join the "trusted big tech" group, though, to do what they are doing, regardless of anyone's fears. Is that clearer?

    they are prone to producing toxic content is, in your opinion, comparable to the burning of books?

    I didn't say that anyway, I said chatGPT doesn't need to be burned just because "trusted big tech" didn't create it, regardless of what two NYTimes authors studied. Again, laws > fears, IMO.

    So far I don't see ChatGPT going full Nazi like Microsoft's Tay bot did, and I'm sure that's frustrating a lot of meme makers, and others, who are trying to make that happen. Progress.

    Can we get back to the tech, though? A company did a study and found that 63% of people weren't even aware they were using generative AI tech in 2019. I imagine that number has come down now, but I bet people don't realize how many companies are actively developing it - [ventureradar.com...]

    It's all the rage on Wall st right now, thanks to them NOT going full Nazi, lol.

    phranque

    3:16 am on Jan 10, 2023 (gmt 0)

    WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



    chatGPT doesn't use a bot...

    Sgt_Kickaxe

    3:19 am on Jan 10, 2023 (gmt 0)



    chatGPT doesn't use a bot...

    I didn't say they did. Microsoft's Twitter bot, Tay, tested their latest AI online in 2016, and it went "full Nazi" impressively. [cbsnews.com...]

    We've been hearing fears ever since that AI will create toxic content against various groups. I don't share those fears, obviously, because laws exist against it, and because Tay was as much a miscalculation of how people would treat an exposed bot as it was the tech not being ready.

    I doubt it's ready now, Google's code-red tells me it's getting close, but chatGPT hasn't gone "full Nazi" yet. So far so good, right?

    [edited by: Sgt_Kickaxe at 4:13 am (utc) on Jan 10, 2023]

    Sgt_Kickaxe

    3:25 am on Jan 10, 2023 (gmt 0)



    Enough already, back to the tech.

    Speaking of law, an AI-based robot is set to defend a human in court for the first time In history. [in.mashable.com...]

    robzilla

    10:08 am on Jan 10, 2023 (gmt 0)

    WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



    chatGPT doesn't have to join the "trusted big tech" group, though, to do what they are doing, regardless of anyone's fears. Is that clearer?

    Not much. Who said they had to?

    Any majorly successful tech company is bound to join "Big Tech" at some point, by sheer popularity and influence, whether they like it or not.

    I didn't say that anyway, I said chatGPT doesn't need to be burned just because "trusted big tech" didn't create it, regardless of what two NYTimes authors studied.

    Who argued for that, then?

    *Sigh* - yes, POLICED BY ACTUAL LAW ENFORCEMENT TASKED WITH ENFORCING ACTUAL LAW.

    It's not just about law enforcement, content can be both toxic and lawful.

    ronin

    11:17 am on Jan 10, 2023 (gmt 0)

    WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



    fears that AI will create toxic content


    An AI can only output according to 1) its dataset and 2) how it processes that dataset.

    Tay read prejudice, treated it as agnostically as it treated everything else, didn't filter it out and so regurgitated it.

    ChatGPT has biases built-in, so that it may recognise prejudice and won't treat it as agnostically as everything else.

    There are only two solutions if you want to fine-tune the AI output so that the AI doesn't sound like humanity:

    1) Curate the input

    2) Add filters to recognise certain kinds of input and process them differently
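A crude illustration of option 2 (my own toy stand-in; real moderation uses trained classifiers, not pattern lists):

```javascript
// Toy output filter (illustrative stand-in only; production systems
// use trained moderation classifiers, not regex blocklists).
// Text matching any blocked pattern is rejected rather than served.
function filterOutput(text, blockedPatterns) {
  const hit = blockedPatterns.find((pattern) => pattern.test(text));
  return hit ? { allowed: false } : { allowed: true, text };
}
```

Curating the input (option 1) happens before training; a filter like this runs on every response after generation, which is why it can be updated without retraining the model.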
    This 39 message thread spans 2 pages: 39