Yes that is very awesome and I really like that idea! Great job with thinking that up!!
AI... A new dawn, or the demise of humanity ?
-
@crazycells Agree. The distribution of revenge porn is already illegal, so why isn’t this? It’s technically along the same lines.
-
Here’s an interesting topic.
The perpetrator of this crime used AI to fake an antisemitic message, which was then circulated widely and placed the victim of the impersonation (and their family) at high risk of harm.
Scott Shellenberger, the Baltimore County state’s attorney, added: “We also need to take a broader look at how this technology can be used and abused to harm other people.”
Not the first, or the last.
-
This is extremely worrying.
This quote alone gives me the creeps:
Air force boss Frank Kendall was so impressed he said he’d trust it with deciding whether to launch weapons in war.
This is a seriously stupid statement in my view and there’s no way I’d ever trust something like AI to make decisions on using weapons or not.
It seems that this is a serious concern shared by others
Arms control experts and humanitarian groups are concerned that AI might one day be able to autonomously drop bombs without further human consultation and are seeking restrictions on its use.
Clearly, the last video in the first article on this thread shows why we should heed the warnings about the risks involved.
-
Ouchhh, bad bad bad
-
@DownPW yes, very. Shocking that the US military actually thinks this is a good idea. I mean, without going all “Hollywood” in terms of The Terminator franchise, there is a serious risk to human life here.
-
“I could see my face and hear my voice. But it was all very creepy, because I saw myself saying things that I never said,” the 21-year-old, who studies at the University of Pennsylvania, told the BBC.
https://www.bbc.co.uk/news/articles/c25rre8ww57o
Statistics disclosed by the public security department in 2023 show authorities arrested 515 individuals for “AI face swap” activities. Chinese courts have also handled cases in this area.
-
SCARLETT JOHANSSON’S STATEMENT IN FULL
Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system.
He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI.
He said he felt that my voice would be comforting to people.
After much consideration and for personal reasons, I declined the offer.
Nine months later, my friends, family and the general public all noted how much the newest system named “Sky” sounded like me.
When I heard the released demo, I was shocked, angered and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.
Mr Altman even insinuated that the similarity was intentional, tweeting a single word “her” - a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.
Two days before the ChatGPT 4.0 demo was released, Mr Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.
As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the “Sky” voice. Consequently, OpenAI reluctantly agreed to take down the “Sky” voice.
In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity.
I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.
-
Glue cheese to pizza and eat rocks?? Wow.
-
Big Brother is watching you. Literally, and making incorrect decisions in the process.
-
Now here’s a use for AI I would support.
-
This is bad for democracy, as the article indicates. Not only has the general election not even taken place in the UK, but to immediately provide the “winner” is problematic to say the least when nobody really knows how people will vote.
Sure, there are polls etc, but what really matters is the actual vote count.
https://news.sky.com/story/chatgpt-tells-users-labour-has-already-won-the-election-13148330
It sourced this particular answer to Wikipedia and an article by the New Statesman that analyses who will win the general election on 4 July.
And this is where it all falls apart. ChatGPT is clearly only as accurate as the resources it harvests - even if they are predictions.
-
Here we go again - Meta completely ignoring privacy to “train” its own AI
-
Ok, now this is creepy. Not only do these look realistic, but according to the article
“Psychological counselling and health are certainly future application scenarios. We are currently conducting research such as auxiliary treatment and preliminary screening for emotional and psychological disorders,” he says.
I personally think anyone suffering from mental health issues is more likely to be pushed over the edge by this than assisted in their recovery.
https://news.sky.com/story/hyper-realistic-humanoid-robots-could-be-used-in-psychotherapy-13151120
-
Seems even the Pope has something to say about AI…
-
Some useful tips here to help secure your personally identifiable information
-
This one is nothing short of amusing, but carries the same message
The decision comes amid concerns over the potential impact AI could have on jobs and the workplace.
https://news.sky.com/story/mcdonalds-ends-ai-drive-thru-trial-after-order-mishaps-13155091
-
@phenomlab Here’s an update to this specific article
https://news.sky.com/story/scarlett-johnasson-speaks-out-about-clash-with-openai-13158491
-
Now there’s a surprise - NOT
Supplementary opt-out information for EU and UK residents
How users in Europe and the UK can opt out
Users in the European Union and the UK, who are protected by strict data protection regimes, have the right to object to their data being scraped, so they can opt out more easily.
If you have a Facebook account:
-
Log in to your account. You can access the new privacy policy by following this link. At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link, or here. Alternatively, you can click on your account icon at the top right-hand corner. Select “Settings and privacy” and then “Privacy center.” On the left-hand side you will see a drop-down menu labeled “How Meta uses information for generative AI models and features.” Click on that, and scroll down. Then click on “Right to object.”
-
Fill in the form with your information. The form requires you to explain how Meta’s data processing affects you. I was successful in my request by simply stating that I wished to exercise my right under data protection law to object to my personal data being processed. You will likely have to confirm your email address.
-
You should soon receive both an email and a notification on your Facebook account confirming if your request has been successful. I received mine a minute after submitting the request.
If you have an Instagram account:
-
Log in to your account. Go to your profile page, and click on the three lines at the top-right corner. Click on “Settings and privacy.”
-
Scroll down to the “More info and support” section, and click “About.” Then click on “Privacy policy.” At the very top of the page, you should see a box that says “Learn more about your right to object.” Click on that link, or here.
-
Repeat steps 2 and 3 as above.
Provided courtesy of https://www.technologyreview.com/2024/06/14/1093789/how-to-opt-out-of-meta-ai-training/
I have just submitted my request using the guidelines above. I’ll post here as soon as I receive a response.
EDIT: I’ve now received a response from Facebook below
Hi ,
We’ve reviewed your request and will honor your objection. This means your request will be applied going forward.
If you want to learn more about generative AI, and our privacy work in this new space, please review the information we have in Privacy Center.
This inbox cannot accept incoming messages. If you send us a reply, it won’t be received.
Thanks,
Privacy Operations

Based on this, it appears that Facebook are open to exclusion. However, if they were not, I would have deleted my account.
-