Yay or Nay on AI?

One listing seeking a senior software development engineer says the company is “reimagining Amazon Search with an interactive conversational experience” designed to help users find answers to questions, compare products and receive personalized suggestions.

(reaction GIFs: “you’ve got to be kidding me” and “this is gonna hurt”)

Go to the blog to see Baby Yoda and Antarctica: (CNN) “This is what happens when ChatGPT tries to create crochet patterns” :laughing:

If there is one piece of wisdom I gleaned from this experiment, it’s that human intelligence is fundamentally interdisciplinary. Language bleeds into sight, which tangles with memory or personality and so on. Artificial intelligence programs don’t really work that way.

Most damningly, both suits suggest that the mere existence of these AI models is illegal under the Copyright Act, since they need to be fed with potentially copyrighted information in order to work as anticipated.

AI is $HIT!

All you need to be is an Amazon seller to know this and we’ve known this for years.

It has its place for certain things, but when it comes to critical decision-making - NOPE…

Sorry

The basic problem with any “large language model” AI, like ChatGPT and its ilk, is that it was “trained” on large piles of text FROM THE INTERNET.

So, what it has accepted as “fact” or “useful” is riddled with stuff typed by ignorant people, who had no credentials in the field, and were “debating” or “arguing” rather than making declarative statements.

So the AI can BS its way through many situations if the reader is less informed on the subject, but if one speaks with ChatGPT about something with which one is familiar and even slightly expert, the illusion falls apart. So it can entertain, but never accurately inform. It’s rather like Fox News in that regard.

Now, if you take an AI and train it with nothing but peer-reviewed papers published in the major journals, you’ve got something more useful, but still not 100%, as it will make connections that are not there, mostly making the classic correlation/causation error (more ice cream sales don’t result in more murders; it’s the heat that tends to increase both).
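To make that correlation/causation trap concrete, here is a minimal sketch with made-up numbers (the variable names and coefficients are illustrative assumptions, not real data): two quantities that are both driven by temperature end up strongly correlated with each other, even though neither causes the other - exactly the kind of connection a pattern-matcher is tempted to “learn.”

```python
# Minimal sketch with synthetic (made-up) data: ice cream sales and violent
# incidents both rise with temperature, so they correlate strongly with each
# other even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)

temperature = rng.uniform(10, 35, size=365)                       # daily highs, in C
ice_cream_sales = 50 + 4.0 * temperature + rng.normal(0, 10, size=365)
violent_incidents = 2 + 0.3 * temperature + rng.normal(0, 2, size=365)

# Strong correlation between sales and incidents...
print(round(np.corrcoef(ice_cream_sales, violent_incidents)[0, 1], 2))

# ...but only because both track the common cause, temperature.
print(round(np.corrcoef(temperature, ice_cream_sales)[0, 1], 2))
print(round(np.corrcoef(temperature, violent_incidents)[0, 1], 2))
```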

Neural nets are great, but the AI-powered “winged feet of Mercury” have not appeared in 30 years - all we have are the muddy boots of “expert systems,” which are very, very good at one VERY NARROW thing.

And even in expert systems, all that linear algebra is untraceable, un-loggable, un-auditable. If it screws up, one can only tell from the outcome being bad, and there’s no way to tell when it might make the same error again. All you can do is throw more data into the maw of the neural net, and try to “train it” with data that contradicts the error made. (Of course, this assumes that you actually understand the source of the error, which likely no one does, so we end up with something that feels like a chapter out of Douglas Adams’ “Hitchhiker’s Guide to the Galaxy”…)

Can one trust an AI?
Yes, but only as far as one can comfortably spit a rat.

Or even adventure modules. They need a little tweaking, but they can give you a really nice outline of each chapter of an adventure that you can then build off. Don’t ask for specific CR stat blocks, though; those tend not to be tuned correctly! :smiley:

Personally I like the art styling of Midjourney better than Dall-E. Have you tried that one?

Not true! The problem is apparently they were trained by Sarah Silverman…

Who thought, you know what would be great!? If we trained the bots that are going to take over the world by forcing them to watch Saturday Night Live reruns… Because we wouldn’t want our robot overlords to think we were idiots or anything…

She’s funny, but her (class action) lawsuit is serious and–from this amateur’s perspective–has merit.

Especially when AI regurgitates copyrighted material (e.g., everything on any PDP I’ve personally created, including product descriptions and bullets, whether on Amazon or not) without permission or attribution, which a user then passes off as their own work, because the AIs were “trained” on shadow libraries, internet content, and other people’s IP. :face_with_raised_eyebrow:

By the plaintiff’s own allegation, the use clearly falls under the Educational Use and Fair Use exceptions. I see no merit to the case. Just super cheap publicity for a has-been B-list celebrity.

It’s a class action lawsuit, with many facets and claims.

Just because you personally don’t like that one member of the class action does not mean that the lawsuit is “clearly” this or without merit.

You seem determined to dismiss it without review, just based on your feelings about Sarah Silverman.

As I stated clearly, I am not a lawyer–and neither are you.

What feelings? She’s a very funny girl.

But let’s be brutally honest, Sandra Bullock she ain’t.

What a tough subject.

I love AI for spaceship piloting and auxiliary support systems, but I don’t love it when it’s used to imitate others.

However, I have a tough time with the “large model” having copyrighted content that should be off-limits.

Speaking as someone with incredible charisma, I tend to “borrow” elements from various individuals whose essence I’ve captured and I use that when I am in social situations.

Is anything truly original? You may have a thought that you believe is your own, but it may have come from a huge variety of knowledge that you’ve acquired over your travels through life.

We are just large data model units ourselves, whether we want to admit that or not.

I do not like it when they can use AI to duplicate or recreate a person, though.

Think of how disrespectful it would be to use AI to “resurrect” Robin Williams and create “new” content using a database of all of his existing interviews and videos.

Not this one; you are referring to Doe v. GitHub, Inc., the suit against Microsoft for using GitHub code to train Copilot.

Sarah Silverman is suing OpenAI and Meta—the creators of AI language models ChatGPT and LLaMA, respectively—for stealing information from her book The Bedwetter, according to a pair of lawsuits filed Friday in a U.S. District Court.

Silverman joins fellow authors Richard Kadrey and Christopher Golden in the class-action copyright lawsuits, which claim that both ChatGPT and LLaMA were trained on their books without the writers’ permission. The suits also allege that the models were likely fed the books from “shadow library” databases like Library Genesis and Z-Library.


So Sandra Bullock’s lawsuit would have merit? :roll_eyes:

Regurgitating IP without citing the source is bad, but the Google AI currently has a “hallucination” issue in that it will create non-existent references.

So just like people, AI is “learning” to lie (or create confident fallacy).

Issues with AI right now

  1. Is “learning” by pulling existing information from the internet
    a. Some information is legit
    b. Some information is incorrect
    c. Some information is BS
    d. Some information is specifically created to be false and/or damaging
  2. It is “predicting” what should come next based on what has been entered. It isn’t “thinking” per se; it is “gluing” together what it believes to be the “most relevant group of words” (see the sketch after this list).
  3. It is capable of “lying” (creating 1b and 1c and maybe someday 1d)
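
To make point 2 concrete, here is a deliberately tiny sketch of that “predicting what comes next” behavior - a bigram counter that just glues on whichever word most often followed the current one in its training text. The toy corpus and helper names (next_word_counts, predict_next) are illustrative assumptions; real models use neural networks over vastly more text, but the principle of continuing a sequence by statistical likelihood rather than understanding is the same.

```python
# Toy sketch (not how any production model is built): a bigram "predictor"
# that picks the word most often seen after the current one -- pattern
# gluing, not thinking.
from collections import Counter, defaultdict

corpus = (
    "the customer asked about the product and the seller answered "
    "the customer question about the product listing"
).split()

# Count which word follows which in the "training" text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word; no understanding involved."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

# Generate five words by repeatedly gluing on the most likely continuation.
word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict_next(word)
```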
Yes, but… according to US copyright law, when you put it all together (fixation) in your own unique way (originality), that creation is then protected as your IP.

Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression. In copyright law, there are a lot of different types of works, including paintings, photographs, illustrations, musical compositions, sound recordings, computer programs, books, poems, blog posts, movies, architectural works, plays, and so much more!

Copyright is originality and fixation

Original Works

Works are original when they are independently created by a human author and have a minimal degree of creativity. Independent creation simply means that you create it yourself, without copying.

Fixed Works

A work is fixed when it is captured (either by or under the authority of an author) in a sufficiently permanent medium such that the work can be perceived, reproduced, or communicated for more than a short time. For example, a work is fixed when you write it down or record it.

The difference is, if Sandra Bullock, an A-lister, wants/needs more publicity, she goes out to Starbucks. Those poor B-listers have to work for it.

“Counsel for Individual and Representative Plaintiffs and the Proposed Class”

As of Friday, the case had not been granted class status.

I think that’s where AI is a gray area. AI itself is a created work, but it’s a tool that can create new arrangements of work.

So if the main thing is that the work is original and not a copy, AI does pass that test frequently.

Writing satire in the style of Twain, as long as you don’t claim to be Twain, is something that I would be allowed to do as a human.

Is it really different if an AI does the same?