@crazycells this perhaps? 🙂
[image: terminator_endoskeleton_1020.webp]
@crazycells exactly. Well made point.
Well, this is pretty disturbing also…
An interesting article
“AI doesn’t have the capability to take over”. At least, not yet. At the rate its development is being accelerated in the global race to produce the ultimate product, anything is possible.
An interesting article on how regulators need to take the lead in AI governance
@phenomlab on a related topic… there is this lawsuit which was recently filed… apparently, some authors think the copyright of their books was infringed during the training of the AI, because ChatGPT generates ‘very accurate summaries’ of their books.
I believe during the hearings they will have to explain how the AI is trained. But I suspect this won’t lead anywhere, because I believe ChatGPT knows the summaries of these books thanks to open discussion forums and blogs, not the original books themselves… well, maybe I am wrong…
@crazycells Great article, and a good testing ground for future cases too, I think. I do firmly believe you are right. There are a number of sources (for example, Amazon has a feature where it’s possible to “read” the first couple of chapters of a book, and it’s not behind a paywall either, so it could be scraped) where a synopsis of the book itself could be available as a “teaser” - of course, not the entire work, as that would be pointless.
However, another possibility is the works leaking online in digital format. Whilst novels don’t necessarily have the allure of bootleg DVDs or warez / illegal downloads, it’s still plausible in my view.
Either way, I see this more as a case of plagiarism than anything else. If it’s there on the internet, it can be discovered, and that would be the basis of my argument for sure.
Having read the article, I think this passage says it all:
“ChatGPT allows users to ask questions and type commands into a chatbot and responds with text that resembles human language patterns. The model underlying ChatGPT is trained with data that is publicly available on the internet.”
Based on this, is there really a case? Surely ChatGPT has simply ingested what it found during its crawl and learn process?
This could easily become the “Napster” of 2023.
https://www.theguardian.com/technology/2000/jul/27/copyright.news
@phenomlab said in AI... A new dawn, or the demise of humanity ?:
This could easily become the “Napster” of 2023.
I hope not.
The odds are on OpenAI’s side for now; they can easily get away with saying that they have used publicly available comments and passages. Additionally, they are not really “exchanging” the books. I hope some judges with a boomer mentality do not kill this.
I am not a novelist or an expert in literature, so I can’t judge the quality of the work by these authors who sued ChatGPT. But I know enough to say that most of these authors are not even talented; they just know “the right people with money” to back them up and make them look like they are on the “bestsellers” list, etc. But my respect for them will diminish for sure.
They could have just started the “discussion” on the internet by sharing their views, or even asked OpenAI publicly how the AI is trained… but instead they wanted “publicity” by suing ChatGPT directly. I believe they do not want a solution, they just want “publicity”…
@crazycells said in AI... A new dawn, or the demise of humanity ?:
The odds are on OpenAI’s side for now; they can easily get away with saying that they have used publicly available comments and passages. Additionally, they are not really “exchanging” the books. I hope some judges with a boomer mentality do not kill this.
My thoughts exactly.
@crazycells said in AI... A new dawn, or the demise of humanity ?:
They could have just started the “discussion” on the internet by sharing their views, or even asked OpenAI publicly how the AI is trained… but instead they wanted “publicity” by suing ChatGPT directly. I believe they do not want a solution, they just want “publicity”…
And clearly, remuneration. If they promised to give all proceeds to a charity, then this would look much better in my view, and it would be more about ethics than money.
@phenomlab said in AI... A new dawn, or the demise of humanity ?:
And clearly, remuneration. If they promised to give all proceeds to a charity, then this would look much better in my view, and it would be more about ethics than money.
yeap, we both know their real motivation
lol “core values of socialism”… comradeGPT is coming…
https://www.theverge.com/2023/7/14/23794974/china-generative-ai-regulations-alibaba-baidu
and Google is going down exactly the same path as OpenAI… they are being sued for using “publicly available” data…
https://mashable.com/article/google-lawsuit-ai-bard
hmm, it does not really make any sense to me. I still believe both OpenAI and Google will win these cases, but somehow I want Google to lose and OpenAI to win… I am such a hypocrite…
@crazycells how does AI follow the core “values” of socialism? That’s a ridiculous statement for something that is self-learning.
@phenomlab yeap… what does it even mean? It is not even meaningful; this is just an excuse ‘phrase’ to create more “censorship”, I believe…
Is China following the values of socialism? Or do they just do what one person says?
I guess this is what comradeGPT will look like:
“As a socialist AI model, I don’t have feelings or emotions. But Taiwan does not exist, Hong Kong and Tibet belong to China, and Uygurs must be placed and educated in camps. I cannot provide further details on these topics.”
@crazycells And with the Great Firewall of China being what it is, that AI model won’t know anything about the Western world.
This is the more sinister side of AI that I alluded to in the original article I wrote. The last video was of course made up, but represented a very real risk - even back then.
This is now being realized in varying formats - one in particular from a young girl (or so the mother was led to believe) who “left a tearful message” saying she’d been kidnapped and there was a ransom for her safe return. This in fact turned out to be fake, with the daughter (thankfully) safe and well, but it just proves the extensive nature of AI in terms of voice sampling, for example, and how dangerous this technology could be in the wrong hands.
Seems researchers have found that ChatGPT is inherently biased when it comes to politics - favouring the left
@phenomlab this is not surprising, given that we put a lot of emphasis on equality, social justice, freedom, inclusivity, etc. in the “ideal” life… Additionally, for quite some time (at least since Kant) we have been confident that science is enough to explain our daily life, so we do not need religion (or old traditions) in society, especially to govern anything.
On the 1st and 2nd of November, world leaders will meet with AI companies and experts for global talks that aim to build an international consensus on the future of AI.
The summit will take place at Bletchley Park, where Alan Turing, one of the pioneers of modern computing, worked during World War Two.
86% of organizations using AI agree on need for clear AI guidelines
Prime Minister Rishi Sunak and other world leaders will discuss the possibilities and risks posed by AI at an event in November, held at Bletchley Park, where the likes of Alan Turing decrypted Nazi messages during the Second World War.