Yes, that is an awesome idea and I really like it! Great job thinking that up!
AI... A new dawn, or the demise of humanity ?
-
@crazycells said in AI... A new dawn, or the demise of humanity ?:
Are we trying to prevent deadly weapons from being built, or are we trying to prevent AI from being part of it?
I guess it’s both. Man has always had the ability to create extremely dangerous weapons (think hydrogen and atomic bombs, VX gas, anthrax, etc.), but I think AI has the ability to rapidly enhance research, engineering, and overall development, and also to speed up the testing process using simulated modeling.
I can’t see anyone using ChatGPT to ask for a recipe for Polonium-210, but those who develop their own AI-powered research engines could easily slip under the radar and remain undetected. The immediate concern is terrorist networks and secretive states such as North Korea, where what is already an imminent threat could be amplified many times over and accelerated using AI.
-
@phenomlab I always feel that these are games played by big countries that aim to prevent smaller countries from catching up with them, just like what big companies do… that is why I do not embrace it completely… I am pretty sure new regulations will not be respected by people or countries with bad intentions anyway…
AI is not inherently bad or good, it is the people who are using it.
-
@crazycells said in AI... A new dawn, or the demise of humanity ?:
AI is not inherently bad or good, it is the people who are using it.
Yes, very true. The same analogy applies to guns, and it’s valid: a gun without an operator cannot harm anyone. However, AI differs in the sense that it is connected to a number of neural networks that could potentially make decisions without human interaction or approval.
I’m not going all Hollywood with this, but that possibility already exists.
AI isn’t new technology - it’s been around since the dawn of supercomputers such as Cray, ASCI Blue, and ASCI White, and even back then it had the capacity to beat a world-renowned chess champion, in the guise of IBM Deep Blue.
That was 1996 and 1997, and 26 years is certainly enough time for hardware and software development, along with decent research and engineering budgets, to create the next generation of intelligence. This new interconnected intelligence has an array of existing and emerging technologies that can further bolster its reach.
AI is developing at an alarming rate, and without regulation, will rapidly become a wild west of deep fakes, AI generated music that puts artists out of business (see below), and so much more.
I’m not anti-AI - in fact, I fully support it in the right controlled environments, where it has endless possibilities to enrich our lives, provided it is not abused or leveraged for nefarious purposes.
-
@phenomlab yes, but it is unfair to compare AI with guns (whose primary purpose is to kill).
I believe it would be easier to convince you if we had UBI; as long as people were not put out of business because of AI, you would support any kind of development.
-
@crazycells said in AI... A new dawn, or the demise of humanity ?:
yes, but it is unfair to compare AI with guns (whose primary purpose is to kill).
That’s true - it wasn’t a comparison - more of an alignment.
-
@crazycells said in AI... A new dawn, or the demise of humanity ?:
I believe it would be easier to convince you if we had UBI; as long as people were not put out of business because of AI, you would support any kind of development.
Can you elaborate a bit more on this? I’m not sure what you’re referring to.
-
@phenomlab I feel that you are more concerned about the job market than about AI being used in a big destructive weapon. So, you want regulation of AI so that the damage to society can be controlled and incremental.
That is why I thought that if we could eliminate AI’s effect on the job market and protect people with Universal Basic Income programs, it would be easier to convince you that AI does not need regulation.
-
@crazycells said in AI... A new dawn, or the demise of humanity ?:
I feel that you are more concerned about the job market than about AI being used in a big destructive weapon. So, you want regulation of AI so that the damage to society can be controlled and incremental.
No, that’s not the case. I’m concerned about the impact on the labour market, yes, but even more worried about the potential for mankind to be rendered obsolete and erased by its own technological advances, in the sense of atomic or nuclear warfare.
-
https://www.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html
It looks like some people are worried AI will replace them… AI vs. CEOs… easy choice, I am rooting for AI…
Corporate greed has no good in it; at least AI can be put to some good use…
-
Seems like a sensible and well thought-out approach.
The EU is already moving ahead with its own guardrails, with the European Parliament approving proposed AI regulation on Wednesday.
The draft legislation seeks to set a global standard for the technology, which is used in everything from generative models like ChatGPT to driverless cars.
Companies that do not comply with the rules in the proposed AI Act could face fines of up to €30m (£25m) or 6% of global profits, whichever is higher. That would put Microsoft, a large financial backer of OpenAI’s ChatGPT, at risk of a fine exceeding $10bn (£7.9bn).
-
And so it “begins” (in the sense of AI being used in other areas for nefarious purposes). This one I personally find very disturbing.
-
@phenomlab yeah, this is disturbing. Although no child is harmed in the process itself, these things will never evolve into something good and have a very high potential to motivate people to do bad things… so it should be treated the same as real material.
-
@crazycells exactly. Well made point.
-
Well, this is pretty disturbing also…
-
An interesting article
“AI doesn’t have the capability to take over”. At least, not yet, anyway. At the rate its development is being accelerated in the global race to produce the ultimate product, anything is possible.
-
An interesting article on how regulators need to take the lead in AI governance
-
@phenomlab on a related topic… there is this lawsuit which was recently filed… apparently, some authors think the copyright of their books was infringed during the training of the AI, because ChatGPT generates ‘very accurate summaries’ of their books.
I believe during the hearings they will have to explain how the AI is trained. But I suspect this won’t lead anywhere, because rather than the books themselves, I believe ChatGPT knows the summaries of these books thanks to open discussion forums and blogs… not the original books… well, maybe I am wrong…
-
@crazycells Great article, and a good testing ground for future cases too, I think. However, I do firmly believe you are right. There are a number of sources where a synopsis of the book could be available as a “teaser” (for example, Amazon has a feature where it’s possible to “read” the first couple of chapters of a book, and it’s not behind a paywall either, so it could be scraped) - of course, not the entire works, as that would be pointless.
However, another possibility is the works leaking online in digital format. Whilst novels don’t necessarily have the allure of bootleg DVDs or warez / illegal downloads, it’s still plausible in my view.
However, I see this more as a case for plagiarism than anything else. If it’s there on the internet, it can be discovered, and that would be the basis of my argument for sure.
Having read the article, I think this passage says it all
“ChatGPT allows users to ask questions and type commands into a chatbot and responds with text that resembles human language patterns. The model underlying ChatGPT is trained with data that is publicly available on the internet.”
Based on this, is there really a case? Surely ChatGPT has simply ingested what it found during its crawl-and-learn process?
This could easily become the “Napster” of 2023.
https://www.theguardian.com/technology/2000/jul/27/copyright.news
-
@phenomlab said in AI... A new dawn, or the demise of humanity ?:
This could easily become the “Napster” of 2023.
I hope not.
The odds are on the side of OpenAI for now; they can easily get away with saying that they have used publicly available comments and passages. Additionally, they are not really “exchanging” the books. I hope some judges with a boomer mentality do not kill this.
I am not a novelist or an expert in literature, so I cannot judge the quality of the work by the authors who sued ChatGPT. But I know enough to say that most of these authors are not even talented; they just know “the correct people with money” to back them up, make them look like they are on the “bestsellers” list, etc. But my respect for them will diminish for sure.
They could have just started the “discussion” on the internet by sharing their views, or even asked OpenAI publicly first how the AI is trained… but instead they wanted “publicity” by suing ChatGPT directly. I believe they do not want a solution; they just want “publicity”…
-
@crazycells said in AI... A new dawn, or the demise of humanity ?:
The odds are on the side of OpenAI for now; they can easily get away with saying that they have used publicly available comments and passages. Additionally, they are not really “exchanging” the books. I hope some judges with a boomer mentality do not kill this.
My thoughts exactly.
@crazycells said in AI... A new dawn, or the demise of humanity ?:
They could have just started the “discussion” on the internet by sharing their views, or even asked OpenAI publicly first how the AI is trained… but instead they wanted “publicity” by suing ChatGPT directly. I believe they do not want a solution; they just want “publicity”…
And clearly, remuneration. If they promised to give all proceeds to a charity, then this would look much better in my view, and it would be more about ethics than money.