@crazycells said in How long before AI takes over your job?:
sponsored content
To me, this is the method to get yourself to the top of the list. Unfair advantage doesn’t even properly describe it.
@phenomlab said in AI... A new dawn, or the demise of humanity ?:
They say a picture paints a thousand words…
LOL - yes, in the making LOL
And this is certainly interesting. I came across this on Sky News this morning
Seems even someone considered the “Godfather of AI” has quit, and is now raising concerns around privacy and jobs (and we’re not talking about Steve here either)
Here’s another article of interest relating to the same subject
And the quote which says it all
“The labs themselves say this could pose an existential threat to humanity,” said Mr Mostaque
A cause for concern? Absolutely.
A rare occasion where I actually agree with Elon Musk
Some interesting quotes from that article
“So just having more advanced weapons on the battlefield that can react faster than any human could is really what AI is capable of.”
“Any future wars between advanced countries or at least countries with drone capability will be very much the drone wars.”
When asked if AI advances the end of an empire, he replied: “I think it does. I don’t think (AI) is necessary for anything that we’re doing.”
This is also worth watching.
This further bolsters my view that AI needs to be regulated.
Google CEO Sundar Pichai admits AI dangers ‘keep me up at night’
This is an interesting admission from China - a country with a typically cavalier attitude to emerging tech
And here - Boss of AI firm’s ‘worst fears’ are more worrying than creepy Senate party trick
US politicians fear artificial intelligence (AI) technology is like a “bomb in a china shop”. And there was worrying evidence at a Senate committee on Tuesday from the industry itself that the tech could “cause significant harm”.
An interesting argument, but with little foundation in my view
“But many of our ingrained fears and worries also come from movies, media and books, like the AI characterisations in Ex Machina, The Terminator, and even going back to Isaac Asimov’s ideas which inspired the film I, Robot.”
Interesting - certainly food for thought
https://codecool.com/en/blog/is-ai-our-friend-or-worst-enemy-4-4-arguments-to-decide/
@phenomlab yep, but no need to fear. This might even be better for humanity, since people will have a “common enemy” to fight against, so maybe instead of fighting each other, they will unite.
In general, I do not embrace anthropocentric views well… and since human greed and money will determine how this ends, we can all guess what will happen…
So sorry to say this, mates, but if there is a robot uprising, I will sell out the human race hard
@crazycells I understand your point - although selling out the human race would also include you
There’s a great video on YouTube that goes into more depth (along with the “Slaughterbots” video in the first post) that I think is well worth watching. Unfortunately, it’s over an hour long, but does go into specific detail around the concerns. My personal concern is not one of having my job replaced by a machine - more about my existence.
And whilst it looks very much like I’m trying to hammer home a point here, see below. Clearly, I’m not the only one concerned about the rate of AI’s development, and the consequences if it is not managed properly.
@phenomlab thanks for sharing. I will watch this.
No worries. I do not have a high opinion of the human race; human greed wins every time, so I always feel it will be futile to resist. We are just one of the millions of species around us. Thanks to evolution, our genes make us selfish creatures, but even if there is a catastrophe, I am pretty sure there will be at least a handful of survivors to continue.
@phenomlab Maybe I did not understand it well, but I do not share the opinions of this article. Are we trying to prevent deadly weapons from being built, or are we trying to prevent AI from being part of it?
Regulations might (and probably will) be secretly bent by individual countries. So, what will happen then?
@crazycells said in AI... A new dawn, or the demise of humanity ?:
but I do not share the opinions of this article.
Which specific article? This in its entirety, or one specific post?
@phenomlab oh sorry, I meant to quote this one:
@crazycells said in AI... A new dawn, or the demise of humanity ?:
Are we trying to prevent deadly weapons from being built, or are we trying to prevent AI from being part of it?
I guess it’s both. Man has always had the ability to create extremely dangerous weapons (think hydrogen and atomic bombs, VX gas, anthrax, etc.), but I think AI has the ability to rapidly enhance research, engineering, and overall development, and also to speed up the testing process using simulated modeling.
I can’t see anyone using ChatGPT to ask for a recipe for polonium-210, but those who develop their own AI-powered research engines could easily fly under the radar and remain undetected. The immediate concern is terrorist networks and secretive states such as North Korea, where what is already an imminent threat could be amplified many times over and accelerated using AI.
@phenomlab I always feel that these are games played by big countries to prevent smaller countries from catching up with them, just like big companies do… that is why I do not embrace it completely… I am pretty sure the new regulations will not be respected by people or countries with bad intentions anyway…
AI is not inherently bad or good, it is the people who are using it.
@crazycells said in AI... A new dawn, or the demise of humanity ?:
AI is not inherently bad or good, it is the people who are using it.
Yes, very true. The same analogy applies to guns, and it’s also valid. A gun without an operator cannot harm anyone. However, AI differs in the sense that it is connected to a number of neural networks that could potentially make decisions without human interaction or approval.
I’m not going all Hollywood with this, but that possibility already exists.
AI isn’t new technology - it’s been around since the dawn of supercomputers such as Cray, ASCI Blue and ASCI White, and even back then it was capable of beating a world-renowned chess champion in the guise of IBM’s Deep Blue.
That was 1996 and 1997, and 26 years is certainly enough time for hardware and software development, along with decent research and engineering budgets, to create the next generation of intelligence. This new interconnected intelligence has an array of existing and emerging technologies that can further bolster its reach.
AI is developing at an alarming rate, and without regulation, it will rapidly become a wild west of deepfakes, AI-generated music that puts artists out of business (see below), and so much more.
I’m not anti-AI - in fact, I fully support it in the right controlled environments, where it has endless possibilities to enrich our lives, provided it is not abused or leveraged for nefarious purposes.