[Image: bionic hand and human hand finger pointing]

The A.I. Dilemma – March 9, 2023. Let’s talk about AI and AGI.

I am sure you will all remember me talking about the need to be actively involved in how AI is created, whether by making the right consumer choices, staying informed about what is actually going on, or finding ways to be active on a political level so that our communities are protected. In every case, this means having intelligent discourse with your family, friends, neighbors, and so on.

All through 2023 I have been deleting programs and discontinuing subscriptions to things like voice-to-text apps such as Descript. However, as fast as I can work through all this, it keeps popping up again and again, and the latest computers now come with AI/AGI built in, so we no longer have a choice. I am not against AI per se; I am against it being forced on us and against it being created without higher consciousness. That is why I have been opting out as much as I can.

Below you will see a great YouTube video called The A.I. Dilemma – March 9, 2023. (If you have not seen The Social Dilemma, I recommend that you watch it also.) Here is a synopsis of that video:

You may have heard about the arrival of GPT-4, OpenAI’s latest large language model (LLM) release. GPT-4 surpasses its predecessor in terms of reliability, creativity, and ability to process intricate instructions. It can handle more nuanced prompts compared to previous releases, and is multimodal, meaning it was trained on both images and text. We don’t yet understand its capabilities – yet it has already been deployed to the public.

At the Center for Humane Technology, we want to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people who are closest to the risks and harms inside AI labs are telling us. We translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you’re about to hear is the culmination of that work, which is ongoing.

AI may help us achieve major advances like curing cancer or addressing climate change. But the point we’re making is: if our dystopia is bad enough, it won’t matter how good the utopia we want to create. We only get one shot, and we need to move at the speed of getting it right.

In the video at the 22-minute mark, they talk about the need for our private thoughts to be protected. I want to include this article from December 2023 because it shows that others are thinking about this very thing.

https://www.npr.org/2023/12/09/1218374512/europe-first-comprehensive-ai-rules?sc=18&f=1001


Here is a 60 Minutes segment on quantum computers. You can also look up Microsoft announcing this month that they are creating hardware for personal computers that will increase processing speed and include AI programs.

https://www.cbsnews.com/news/quantum-computing-advances-60-minutes/


And here is the Center for Humane Technology website: https://www.humanetech.com/


I struggle with all of this, because at this point everyone can see it cannot be stopped. AI, robotics, and our day-to-day lives are converging. There is a race right now, as pointed out in the video, to be the one to control this and to make the most money from it. We are living in a time when great things are happening, but we need to take care and ensure we are going in the ‘greater good’ direction. All technology can be weaponized, so we need to be a global society that raises our frequencies first and creates tech based on that. All of this feels conflicting to me, and the way it is being implemented is already controlling our thoughts and goals. It makes me want to unplug and go build a tree house in the woods to live in… so if one day I just walk off into the woods leaving all my tech behind, know that this desire won me over.

I would love to hear your feedback, and let me know if you have any links to add to this post, too.
