@phenomlab lol yeap, very smart… I read it and immediately ask the same question to ChatGPT and saved the letter sample 😄
I might use it in the future.
And whilst it looks very much like I’m trying to hammer home a point here, see the below. Clearly, I’m not the only one concerned at the rate of AI’s development, and the consequences if not managed properly.
@phenomlab thanks for sharing. I will watch this.
No worries. I do not have a high opinion of the human race; human greed wins every time, so I always feel it would be futile to resist. We are just one of the billions of species around us. Thanks to evolution, our genes make us selfish creatures, but even if there is a catastrophe, I am pretty sure there will be at least a handful of survivors to carry on.
@phenomlab maybe I did not understand it well, but I do not share the opinions of this article. Are we trying to prevent deadly weapons from being built, or are we trying to prevent AI from being part of them?
Regulations might (and probably will) be bent by individual countries secretly. So, what will happen then?
@crazycells said in AI... A new dawn, or the demise of humanity ?:
but I do not share the opinions of this article.
Which specific article? This one in its entirety, or one specific post?
@phenomlab oh sorry, I meant to quote this one:
@crazycells said in AI... A new dawn, or the demise of humanity ?:
Are we trying to prevent deadly weapons from being built or are we trying to prevent AI from being part of it
I guess it’s both. Man has always had the ability to create extremely dangerous weapons (think hydrogen and atomic bombs, VX gas, anthrax, etc.), but I think AI has the ability to rapidly enhance research, engineering, and overall development, and also to speed up the testing process using simulated modelling.
I can’t see anyone using ChatGPT to ask for a recipe for Polonium-210, but those who develop their own AI-powered research engines could easily fly under the radar and remain undetected. The immediate concern is terrorist networks and secretive states such as North Korea, where what is already an imminent threat could be amplified many times over and accelerated using AI.
@phenomlab I always feel that these are games played by big countries that aim to prevent smaller countries from catching up with them, just like how big companies operate… that is why I do not embrace it completely… I am pretty sure the new regulations will not be respected by people or countries with bad intentions anyway…
AI is not inherently bad or good; it is the people who are using it.
@crazycells said in AI... A new dawn, or the demise of humanity ?:
AI is not inherently bad or good, it is the people who are using it.
Yes, very true. The same analogy applies to guns, and it’s also valid. A gun without an operator cannot harm anyone. However, AI differs in the sense that it is connected to a number of neural networks that could potentially make decisions without human interaction or approval.
I’m not going all Hollywood with this, but that possibility already exists.
AI isn’t new technology - it’s been around since the dawn of supercomputers such as the Cray machines, ASCI Blue Pacific, and ASCI White, and even back then it was capable of beating a world-renowned chess champion, in the guise of IBM Deep Blue.
That was 1996 and 1997, and 26 years is certainly enough time for hardware and software development, along with decent research and engineering budgets, to create the next generation of intelligence. This new interconnected intelligence has an array of existing and emerging technologies that can further bolster its reach.
AI is developing at an alarming rate, and without regulation, it will rapidly become a Wild West of deepfakes, AI-generated music that puts artists out of business (see below), and so much more.
I’m not anti-AI - in fact, I fully support it in the right controlled environments, where it has endless possibilities to enrich our lives, provided it is not abused or leveraged for nefarious purposes.
@phenomlab yes, but it is unfair to compare AI with guns (whose primary purpose is to kill).
I believe it would be easier to convince you if we had UBI: as long as people were not put out of business because of AI, you would support any kind of development.
@crazycells said in AI... A new dawn, or the demise of humanity ?:
yes, but it is unfair to compare AI with guns (whose primary purpose is to kill).
That’s true - it wasn’t a comparison - more of an alignment.
@crazycells said in AI... A new dawn, or the demise of humanity ?:
I believe it would be easier to convince you if we had UBI so, as long as people were not out of business because of AI, you would support any kind of development.
Can you elaborate a bit more on this? I’m not sure what you’re referring to.
@phenomlab I feel that you are more concerned about the job market rather than AI being used in a big destructive weapon. So, you want regulation of AI so that the damage to society can be controlled and incremental.
That is why I thought that if we could eliminate AI’s effect on the job market and protect people with Universal Basic Income programs, it would be easier to convince you that AI does not need regulation.
@crazycells said in AI... A new dawn, or the demise of humanity ?:
I feel that you are more concerned about the job market rather than AI being used in a big destructive weapon. So, you want regulation of AI so that the damage to society can be controlled and incremental.
No, that’s not the case. I’m concerned about the impact on the labour market, yes, but even more worried about the potential for mankind to be rendered obsolete and erased by its own technological advances, in the sense of atomic or nuclear warfare.
https://www.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html
It looks like some people are worried AI will replace them… AI vs. CEOs … easy choice, I am rooting for AI…
Corporate greed has no good in it; at least AI can be put to some good use…
Seems like a sensible and thought out approach
The EU is already moving ahead with its own guardrails, with the European Parliament approving proposed AI regulation on Wednesday.
The draft legislation seeks to set a global standard for the technology, which is used in everything from generative models like ChatGPT to driverless cars.
Companies that do not comply with the rules in the proposed AI Act could face fines of up to €30m (£25m) or 6% of global profits, whichever is higher. That would put Microsoft, a large financial backer of OpenAI’s ChatGPT, at risk of a fine exceeding $10bn (£7.9bn).
And so it “begins” (in the sense of AI being used in other areas for nefarious purposes). This one I personally find very disturbing.
@phenomlab yeah, this is disturbing. Although no child is harmed in the process itself, these things will never evolve into anything good and have a very high potential to motivate people to do bad things… so it should be treated the same as real material.
@crazycells exactly. Well made point.
Well, this is pretty disturbing also…
An interesting article
“AI doesn’t have the capability to take over”. At least, not yet, anyway. At the rate its development is being accelerated in the global race to produce the ultimate product, anything is possible.