Yay or Nay on AI?

Just because ChatGPT said it, doesn’t mean it’s true. But the opposite is also true.

Just because ChatGPT said it doesn’t mean it’s untrue. So, what part is untrue, and why?

3 Likes

Depends on how the assertion is applied. Using a Newton quote to define gravity is not an appeal to authority if the same conclusion can be reached independently of the quote.

Even if that were true, and an appeal-to-authority fallacy did exist, there is still the core issue of whether you are committing the fallacy fallacy: the mere presence (or possible presence) of a fallacy does not, by itself, make the assertion true or false.
And down the spiral we go… :slight_smile:

2 Likes

If I had said “ChatGPT said it, therefore it is true,” then yes. But I did not say that. The core issue is this: if I address your point completely and you respond with peripheral points that distract from the original point, without addressing your own core statement or what follows from it, that is a red herring. I made a statement about something not being true; the response was that it is true; I followed up with why the original assertion was not true, using a source I’ve been accused of relying on to formulate my arguments. The onus lies on the one who makes a claim, and I satisfied both the claim and the reason, a courtesy I’m then not extended in these discussions. Instead, I’m given additional concerns which obfuscate the main claim. One can make a claim, demonstrate why the claim is true, and then add additional points. But that does not happen.

Is any of this important right now? No. Happy Turkey day to you all.

Much love.
:heart_hands:

3 Likes

Logic isn’t about true or not true. Something can be illogical yet true, or logical yet false.

1 Like

You still seem to be implying that what ChatGPT said is untrue. So, please explain why; I am curious about your reasoning. I have been asking this of everyone in this thread who questions ChatGPT’s logic, yet still no answer.

The collective “you” have been saying that because it is ChatGPT it is “not a credible source of information,” or accusing those who agree of using “ChatGPT as the expert for your logic.” However, no one has said why ChatGPT is wrong.

If ChatGPT is so incredibly wrong, why is everything about it being attacked except its actual answer? If it is wrong, shouldn’t the fallacy be the first thing you challenge?

3 Likes

I’m not implying anything about ChatGPT. I’m saying that asserting what ChatGPT said is correct simply because it comes from ChatGPT is a fallacy.

Logic is a framework of discussion that has nothing to do with something being right or wrong; it’s about the structure of the discussion.

A red herring is a logical fallacy. An appeal to authority is a logical fallacy. It has nothing to do with ChatGPT itself, it’s the framework of saying something is correct because it came from ChatGPT.

1 Like

Except no one did that. This is precisely what Lake does repeatedly. You’re arguing against something no one did or said.

No one said that because the argument came from ChatGPT it is therefore correct. You would be right had there been no argument and only an appeal. But that didn’t happen.

There was an argument made in support of the main claim, namely: saying something is more affordable by raising the price for everyone else is not factually true; it is only relatively true. The argument went further with an analogy: making someone taller by placing everyone else around them in a hole. I used ChatGPT to demonstrate why the PR about making National Park passes more affordable for Americans was patently false. An appeal would be one made sans explanation. That did not happen.

Again, for the nth time: explain why what ChatGPT said is untrue.

3 Likes

As @Tried_Tested said, no one said that. Instead of challenging the statement, you challenged something that no one here said, which is itself a fallacy. So, if we have been misled by ChatGPT – if ChatGPT is wrong – please tell us how.

3 Likes

ChatGPT is no more accurate than a Google search. It is trained on unverified data.

I use Copilot from time to time. Copilot is Microsoft’s version of ChatGPT. It provides the sites its response is based upon.

I can decide whether the answer is likely to be total crap or partial crap based on those cited sites, just as I can when I do a Google search. If the source is Wikipedia, it is not a reliable source.

If the source is not a primary source of the data, it is probably not a reliable source.

An example: there were hundreds of articles on websites about a Pew Research study which claimed atheists know more about religion than believers.

With great effort, I found the actual Pew site with the study. Pew is a reputable organization, but this study did not show what the articles claimed. The claim was based on someone’s reading of the study. That person failed to take into account Pew’s disclaimers, the study methodology, and Pew’s own identification of that methodology as atypical.

AI is only as good as the data it is trained on. The internet search data used for training may include a single site that is a primary source and thousands of sites that are interpretations of it, many interpreted to fit an agenda that was not that of the original collectors of the data.

Using ChatGPT as a source raises questions about the competence of the person who uses it. It does not increase the chances of the statement being correct. It is opinion, just like a statement based on an individual’s personal knowledge, but it is second-hand unverifiable data instead of first-hand information.

3 Likes

:wink: Uh oh…

3 Likes

Seconded - and I’d go further: the ever-accumulating evidence that increasing numbers of us aren’t particularly inclined to adhere to scientific methodology and academic rigor in our analytical thinking processes portends an ill wind blowing…

I remain fond of the hashtag #TeachYourChildrenTRUTH - but I would not recommend relying upon the output of any LLM for doing so, other than the naturally equipped one already built into one’s own hat-rack of a head.

1 Like

My identification of Copilot as a source was a warning that this is questionable data.

It was not cited as an authority.

Sorry if you missed the intent.

3 Likes

So tell us again: what part of its answer within this thread is inaccurate? Apparently it is so inaccurate that you have yet to identify the inaccuracy.

The debate here is on the economy (at least it was until Papy moved the discussion), not on whether ChatGPT is accurate. Do not change the topic of conversation. ChatGPT was used to help generate an answer. If the answer is incorrect, then challenge the answer. I would gladly yield to you if you could identify its mistake.

3 Likes

A Nay vote from me on the 120225 “About AWS” blog post announcing the Nova 2 family of dufuses:

Announcing Amazon Nova 2 foundation models now available in Amazon Bedrock - AWS

Early signs that this launch was coming seemed to have fueled some fairly broad speculation among certain industry observers re: last month’s AWS outage perhaps having been caused by AI implementations run amok; I wonder how much of a kernel of truth there might be in that notion.

2 Likes

Even if the attribution were to AI (which I doubt), let’s assume it was: it would still be human error. Writing elaborate redundancies is a human problem, not an AI problem. For an AI to understand every single scenario of such a situation, it would need enough awareness of itself and of the business to make such a forecast; so far, from what I’ve seen, AI is not being used that way.

1 Like

To be fair, available evidence suggests that implementations like AWS Agentic AI (link) are already being used for doing that (often in conjunction with RAG [“retrieval augmented generation”] adjuncts) in some cases; I suspect that the new Nova 2 family (link) is almost certainly going to see moves in the same direction.
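For anyone unfamiliar with the RAG pattern mentioned above, here is a minimal, self-contained sketch of the idea: retrieve the documents most relevant to a query, then prepend them as context before generation. This is purely illustrative; the documents, the naive keyword-overlap scorer, and the prompt template are all invented here, and real systems (AWS’s included) use vector embeddings and an actual LLM call instead.

```python
# Minimal illustration of the RAG ("retrieval augmented generation") pattern.
# NOT any vendor's implementation: the documents, the overlap-based scorer,
# and the prompt template below are all made up for illustration.

def tokenize(text):
    """Lowercase and split into a set of word tokens."""
    return set(text.lower().split())

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:top_k]

def build_prompt(query, documents):
    """Augment the user query with retrieved context before generation."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "Nova 2 foundation models are available in Amazon Bedrock.",
    "The produce store closed two years ago.",
    "Waymo operates robot taxis in California.",
]
print(build_prompt("What models are in Bedrock?", docs))
```

The design point is simply that the model answers from retrieved, inspectable context rather than from its training data alone, which is what makes the output easier to audit.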

1 Like

I just had an experience that can show just how useful AI can be.

I needed to go to the produce store, so I checked Google to see how busy the store I go to every week is (“a bit busier than usual” was the result). But I also saw the AI summary, telling me that the store is now closed and out of business, and in fact closed over 2 years ago!

I’m glad no one at the store has heard about it!

6 Likes

Not sure how much of this is AI and how much is just tech in general, but I’m glad I’m not in CA!

Can I order a REAL taxi???

2 Likes

This was real. A few folks I know went to the city during the blackout; it was pitch black, and the dead Waymo taxis were quite apparent. They had to drive around them.

3 Likes

IDK; I was thankfully in NJ when the big Northeast blackout hit 20+ years ago, but I know people who were in NYC when it hit. Compared to being stuck on a subway or in an elevator (or, for some people I know, the horror of being stuck on an escalator), and the general traffic issues when the traffic lights are out, dealing with a stopped robot taxi seems rather minor. And probably far better than dealing with a robot taxi that is NOT stopped but doesn’t know what it’s doing.

On the bright side, at least they’re not all honking at each other at 3:00 am trying to park!

1 Like