Yes, that is very awesome and I really like that idea! Great job thinking that up!!
AI... A new dawn, or the demise of humanity ?
-
@phenomlab you are right, that is a possibility…
is there any way to make these devices one-way transmission only? or that is also not a good solution?
-
@crazycells they’d have to both transmit and receive to work properly, I think, so bidirectional.
There would need to be a secret passphrase between the two devices at the very least to secure the communication channel.
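Purely as an illustration of what "a secret passphrase at the very least" could mean in practice, here is a minimal sketch of authenticating messages between two devices using a key derived from a pre-shared passphrase. The function names and the example message are hypothetical, not from any real implant protocol:

```python
import hashlib
import hmac
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Stretch the passphrase into a proper key; a raw passphrase is
    # too weak to use directly as keying material.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def sign(key: bytes, message: bytes) -> bytes:
    # Attach an HMAC tag so the receiver can detect tampering or forgery.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign(key, message), tag)

# Both devices would share the passphrase and salt out of band.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

msg = b"status: ok"
tag = sign(key, msg)
assert verify(key, msg, tag)          # genuine message accepted
assert not verify(key, b"evil", tag)  # forged message rejected
```

This only covers authenticity, not confidentiality; a real design would also encrypt the channel and handle replay protection, which is exactly why "a passphrase at the very least" is the right framing.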
-
@crazycells Found this - a good read, and completely plausible
https://cisomag.com/how-brainjacking-became-a-new-cybersecurity-risk-in-health-care/
What is Brainjacking?
Brainjacking is a kind of cyberattack in which a hacker obtains unauthorized access to neural implants in a human body. Hacking surgically implanted devices in a human brain could allow an attacker to control the patient’s cognition and functions, potentially resulting in drastic consequences.
Brain implants, also referred to as neural implants, are microchips that connect directly to a human’s brain to establish a brain-computer interface (BCI) in a brain that has become dysfunctional due to medical issues.
-
@phenomlab well, yeah, I guess this is possible, and you would know much more about this since you are a security expert. They have to take a lot of precautions for this not to happen. I am only talking from the biology perspective, and to me, this hacking would more likely affect the movements of the muscles rather than cause any cognitive effect, because this device is basically just sensing the activity in the brain, interpreting it, and relaying the information as movement… I am not sure if this works in reverse… I do not think they have shown us this so far…
-
Interesting. London has been identified as a huge talent pool for AI by Microsoft.
-
@phenomlab nice… good for London… probably the best city in Europe for this purpose…
I recognize Mustafa Suleyman’s name from DeepMind, the company behind AlphaZero, AlphaGo, and AlphaFold… Apparently he was recently hired by Microsoft.
Edit: well, I forgot to mention, his name is very Turkish so I wondered about his background at the time; it turns out his father is Syrian… (just like Steve Jobs lol)
-
@phenomlab said in AI... A new dawn, or the demise of humanity ?:
Seems this specific issue is becoming more widespread
-
@phenomlab yeah, unfortunately I heard about this being used in some high schools, people are creating fake explicit images of their classmates…
hopefully, this law becomes more common.
although thinking about it… why is it not a criminal offense already? you are trying to harm someone and actively doing something to achieve it (spreading false information and causing reputational damage)…
maybe there were holes in the defamation-related laws, and hopefully this will close all the loopholes…
-
@crazycells Agree. The distribution of revenge porn is already illegal, so why isn’t this? It’s technically along the same lines.
-
Here’s an interesting topic.
The perpetrator of this crime used AI to fake an anti-Semitic message, which was then circulated widely and placed the victim of the impersonation (and their family) at high risk of harm.
Scott Shellenberger, the Baltimore County state’s attorney, added: “We also need to take a broader look at how this technology can be used and abused to harm other people.”
Not the first, or the last.
-
This is extremely worrying
This quote alone literally gives me the creeps
Air force boss Frank Kendall was so impressed he said he’d trust it with deciding whether to launch weapons in war.
This is a seriously stupid statement in my view, and there’s no way I’d ever trust something like AI to make decisions on whether or not to use weapons.
It seems that this is a serious concern shared by others
Arms control experts and humanitarian groups are concerned that AI might one day be able to autonomously drop bombs without further human consultation and are seeking restrictions on its use.
Clearly, the last video in the first article on this thread shows why we should heed the warnings about the risks involved.
-
Ouchhh, bad bad bad
-
@DownPW yes, very. Shocking that the US military actually thinks this is a good idea. I mean, without going all “Hollywood” in terms of The Terminator franchise, there is a serious risk to human life here.
-
“I could see my face and hear my voice. But it was all very creepy, because I saw myself saying things that I never said,” the 21-year-old, who studies at the University of Pennsylvania, told the BBC.
https://www.bbc.co.uk/news/articles/c25rre8ww57o
Statistics disclosed by the public security department in 2023 show authorities arrested 515 individuals for “AI face swap” activities. Chinese courts have also handled cases in this area.
-
SCARLETT JOHANSSON’S STATEMENT IN FULL
Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system.
He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI.
He said he felt that my voice would be comforting to people.
After much consideration and for personal reasons, I declined the offer.
Nine months later, my friends, family and the general public all noted how much the newest system named “Sky” sounded like me.
When I heard the released demo, I was shocked, angered and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.
Mr Altman even insinuated that the similarity was intentional, tweeting a single word “her” - a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.
Two days before the ChatGPT 4.0 demo was released, Mr Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.
As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the “Sky” voice. Consequently, OpenAI reluctantly agreed to take down the “Sky” voice.
In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity.
I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.
-
Glue cheese to pizza and eat rocks?? Wow.
-
Big Brother is watching you. Literally, and making incorrect decisions in the process.