@phenomlab lol yeap, very smart… I read it and immediately asked the same question to ChatGPT and saved the letter sample 😄
I might use it in the future.
@phenomlab I feel that you are more concerned about the job market than about AI being used as a massively destructive weapon. So you want regulation of AI so that the damage to society can be controlled and incremental.
That is why I thought that if we could eliminate AI’s effect on the job market and protect people with Universal Basic Income programs, it would be easier to convince you that AI does not need regulation.
@crazycells said in AI... A new dawn, or the demise of humanity ?:
I feel that you are more concerned about the job market than about AI being used as a massively destructive weapon. So you want regulation of AI so that the damage to society can be controlled and incremental.
No, that’s not the case. I’m concerned about the impact on the labour market, yes, but even more worried about the potential for mankind to be rendered obsolete and erased by its own technological advances, in the sense of atomic or nuclear warfare.
https://www.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html
It looks like some people are worried AI will replace them… AI vs. CEOs … easy choice, I am rooting for AI…
Corporate greed has no good in it; at least AI can be put to some good use…
Seems like a sensible and well-thought-out approach.
The EU is already moving ahead with its own guardrails, with the European Parliament approving proposed AI regulation on Wednesday.
The draft legislation seeks to set a global standard for the technology, which is used in everything from generative models like ChatGPT to driverless cars.
Companies that do not comply with the rules in the proposed AI Act could face fines of up to €30m (£25m) or 6% of global profits, whichever is higher. That would put Microsoft, a large financial backer of OpenAI’s ChatGPT, at risk of a fine exceeding $10bn (£7.9bn).
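For context, the penalty logic in that quote boils down to a simple "take the larger of the two" rule. Below is a minimal sketch in Python; the €170bn profit figure is purely a hypothetical placeholder to show how a fine in the region of the quoted $10bn could arise, not a number taken from the article.

```python
# A back-of-the-envelope sketch of the "whichever is higher" penalty rule
# described in the article above. The profit figure used below is a
# hypothetical placeholder, not a figure from the article.

FIXED_CAP_EUR = 30_000_000   # €30m flat ceiling from the draft AI Act
PROFIT_RATE = 0.06           # 6% of global profits, per the article

def max_fine_eur(global_profit_eur: float) -> float:
    """Maximum possible fine: €30m or 6% of global profits, whichever is higher."""
    return max(FIXED_CAP_EUR, PROFIT_RATE * global_profit_eur)

# Hypothetical example: roughly €170bn in global profits gives a ceiling of
# about €10.2bn, which is how a figure in the region of $10bn comes about.
print(f"Maximum fine: €{max_fine_eur(170e9):,.0f}")
```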
And so it “begins” (in the sense of AI being used in other areas for nefarious purposes). I personally find this one very disturbing.
@phenomlab yeah, this is disturbing. Although no child is harmed in the process itself, these things will never evolve into anything good and have a very high potential to motivate people to do bad things… so it should be treated the same as real material.
@crazycells exactly. Well made point.
Well, this is pretty disturbing also…
An interesting article
“AI doesn’t have the capability to take over”. At least, not yet, anyway. At the rate its development is being accelerated in the global race to produce the ultimate product, anything is possible.
An interesting article on how regulators need to take the lead in AI governance
@phenomlab on a related topic… there is this lawsuit which was recently filed… apparently, some authors think the copyright of their books was infringed during the training of the AI, because ChatGPT generates ‘very accurate summaries’ of their books.
I believe during the hearings they will have to explain how the AI is trained. But I suspect this won’t lead anywhere, because I believe ChatGPT knows the summaries of these books thanks to open discussion forums and blogs rather than the original books themselves… well, maybe I am wrong…
@crazycells Great article, and a good testing ground for future cases too, I think. I do firmly believe you are right. There are a number of sources where a synopsis of the book could be available as a “teaser” (for example, Amazon has a feature where it’s possible to “read” the first couple of chapters of a book, and it’s not behind a paywall either, so it could be scraped) - of course, not the entire work, as that would be pointless.
Another possibility is the works leaking online in digital format. Whilst novels don’t necessarily have the allure of bootleg DVDs or warez / illegal downloads, it’s still plausible in my view.
However, I see this more as a case of plagiarism than anything else. If it’s there on the internet, it can be discovered, and that would be the basis of my argument for sure.
Having read the article, I think this passage says it all:
“ChatGPT allows users to ask questions and type commands into a chatbot and responds with text that resembles human language patterns. The model underlying ChatGPT is trained with data that is publicly available on the internet.”
Based on this, is there really a case? Surely ChatGPT has simply ingested what it found during its crawl-and-learn process?
This could easily become the “Napster” of 2023.
https://www.theguardian.com/technology/2000/jul/27/copyright.news
@phenomlab said in AI... A new dawn, or the demise of humanity ?:
This could easily become the “Napster” of 2023.
I hope not.
The odds are on the side of OpenAI for now; they can easily get away with saying that they have used publicly available comments and passages. Additionally, they are not really “exchanging” the books. I hope some judges with a boomer mentality do not kill this.
I am not a novelist or an expert in literature, so I cannot judge the quality of the work by these authors who sued ChatGPT. But I know enough to say that most of these authors are not even talented; they just know “the right people with money” to back them up and make them look like they are on the “bestsellers” list, etc. Either way, my respect for them will diminish for sure.
They could have just started the “discussion” on the internet by sharing their views, or even asked OpenAI publicly first how the AI is trained… but instead they wanted “publicity” by suing ChatGPT directly. I believe they do not want a solution; they just want “publicity”…
@crazycells said in AI... A new dawn, or the demise of humanity ?:
The odds are on the side of OpenAI for now; they can easily get away with saying that they have used publicly available comments and passages. Additionally, they are not really “exchanging” the books. I hope some judges with a boomer mentality do not kill this.
My thoughts exactly.
@crazycells said in AI... A new dawn, or the demise of humanity ?:
They could have just started the “discussion” on the internet by sharing their views, or even asked OpenAI publicly first how the AI is trained… but instead they wanted “publicity” by suing ChatGPT directly. I believe they do not want a solution; they just want “publicity”…
And clearly, remuneration. If they promised to give all proceeds to a charity, then this would look much better in my view, and it would be more about ethics than money.
@phenomlab said in AI... A new dawn, or the demise of humanity ?:
And clearly, remuneration. If they promised to give all proceeds to a charity, then this would look much better in my view, and it would be more about ethics than money.
yeap, we both know their real motivation
lol “core values of socialism”… comradeGPT is coming…
https://www.theverge.com/2023/7/14/23794974/china-generative-ai-regulations-alibaba-baidu
And Google is going down exactly the same path as OpenAI… they are being sued for using “publicly available” data…
https://mashable.com/article/google-lawsuit-ai-bard
Hmm, it does not really make any sense to me. I still believe both OpenAI and Google will win these cases, but somehow I want Google to lose and OpenAI to win… I am such a hypocrite…
@crazycells how does AI follow the core “values” of socialism? That’s a ridiculous statement for something that is self-learning.
@phenomlab yeap… what does that even mean? It is not even meaningful; this is just an excuse ‘phrase’ to create more “censorship”, I believe…
Is China following the values of socialism? Or do they just do what one person says?
I guess this is what comradeGPT will look like:
“As a socialist AI model, I don’t have feelings or emotions. But Taiwan does not exist, Hong Kong and Tibet belong to China, and Uygurs must be placed and educated in camps. I cannot provide further details on these topics.”
@crazycells And with the Great Firewall of China being what it is, that AI model won’t know anything about the Western world.