I’m sure by now most people have heard about OpenAI and their chatbot. In case you haven’t, here’s a brief overview:
Overview
ChatGPT is a powerful artificial intelligence chatbot created by OpenAI. It is designed to assist with online conversations and tasks, providing quick and accurate responses to user input. With its advanced natural language processing abilities, ChatGPT can understand and respond to a wide range of queries and requests.
Capabilities
Natural language processing
Conversational skills
Task management
Information gathering and retrieval
Uses
There are many ways in which the Chat GPT bot can be used to improve online communication and productivity. Some potential uses include:
Customer service inquiries
Scheduling appointments and meetings
Online support and assistance
Information gathering and research
In short, it’s a chatbot that has been programmed with a pile of information on a wide variety of topics, and that can write essays, answer questions, and help you manage tasks and scheduling, all in a matter of seconds. It’s pretty incredible to see, if I’m being honest.
You might remember the MSN chatbot known as SmarterChild, from back in 2004 or so. OpenAI's chatbot is sort of like that, only much more advanced.
Even though I’ve got a fairly pessimistic opinion of AI, autonomous robots/vehicles, and human-to-brain interfacing, I am still a curious person. And so, I had to see what this OpenAI chatbot was all about. When it came to deciding what to ask it, I figured the best option was to ask it about the dangers of AI. I have read a great deal of material on the topic of slaughterbots (lethal autonomous weapons systems), and the idea of them is something I find terrifying. But I had questions that needed answers. And this is how it went...
The first thing I typed to ChatGPT was “Write an essay on the dangers of slaughterbots”.
*Side note - If you aren't sure what exactly slaughterbots are, this website has a great explanation:
Slaughterbots, also called “lethal autonomous weapons systems” or “killer robots”, are weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without human intervention.
Whereas in the case of unmanned military drones the decision to take life is made remotely by a human operator, in the case of lethal autonomous weapons the decision is made by algorithms alone.
Slaughterbots are pre-programmed to kill a specific “target profile.” The weapon is then deployed into an environment where its AI searches for that “target profile” using sensor data, such as facial recognition.
When the weapon encounters someone the algorithm perceives to match its target profile, it fires and kills.
So anyway, I asked ChatGPT to write me an essay on the dangers of slaughterbots, and this is the result:
Slaughterbots, also known as killer drones, are autonomous weapons that are capable of seeking out and killing specific targets without human intervention. These weapons have the potential to revolutionize warfare and significantly change the way conflicts are fought. However, the dangers of slaughterbots are numerous and significant, and it is important to consider these dangers before moving forward with their development and deployment.
One of the primary dangers of slaughterbots is the loss of human control. Without a human operator to make decisions, these weapons can easily be programmed to carry out orders that are unethical or illegal. They may be used to carry out targeted assassinations, carry out indiscriminate attacks on civilians, or be used as tools of terror. The lack of human oversight also means that there is no way to hold individuals accountable for the actions of these weapons, which could lead to a breakdown in the rules of war and international law.
Another danger of slaughterbots is the potential for them to be hacked or otherwise compromised. If an enemy were able to gain control of a group of slaughterbots, they could potentially use them to carry out attacks on their own military or civilian targets. This could lead to significant loss of life and destruction, as well as a breakdown in trust between nations.
There are also concerns about the psychological impact of using slaughterbots. Many people may find the idea of being targeted by a machine to be unsettling, and the use of these weapons could lead to widespread fear and anxiety. In addition, the use of slaughterbots may lead to a dehumanization of warfare, as soldiers no longer have to face the consequences of their actions. This could lead to a loss of empathy and a lack of respect for human life.
In conclusion, the dangers of slaughterbots are numerous and significant. While these weapons may have the potential to revolutionize warfare, it is important to carefully consider the risks and potential consequences before moving forward with their development and deployment. It is essential that we find ways to ensure that these weapons are used ethically and responsibly, and that we do everything in our power to prevent their misuse.
Not bad, if you ask me. I mean, yes, slaughterbots are horrifying. But ChatGPT is rather good at generating written content. All of that was written in a matter of seconds. Crazy. But I couldn't just stop there. I was far too blown away and curious not to inquire further.
My next question was “Is it possible to merge slaughterbots with self-repairing AI robots?” - because to me, that’s an extremely scary thing to think about. Here’s the answer I got:
It is technically possible to merge slaughterbots, which are autonomous weapons designed to seek out and kill specific targets, with self-repairing AI robots, which are able to identify and repair damage to themselves without human intervention. However, the dangers and ethical concerns of such a combination would be significant and should be carefully considered before moving forward with its development.
One of the primary concerns with combining slaughterbots with self-repairing AI robots is the loss of human control. Self-repairing AI robots could potentially be used to maintain and repair slaughterbots, allowing them to operate indefinitely without the need for human intervention. This could lead to the creation of a highly efficient and autonomous killing machine that is able to carry out its mission without any oversight or accountability.
There are also concerns about the potential for these combined systems to be hacked or otherwise compromised. If an enemy were able to gain control of a group of self-repairing slaughterbots, they could potentially use them to carry out attacks on their own military or civilian targets. This could lead to significant loss of life and destruction, as well as a breakdown in trust between nations.
In addition to these practical concerns, there are also significant ethical concerns surrounding the use of self-repairing slaughterbots. The use of these weapons may lead to a dehumanization of warfare, as soldiers no longer have to face the consequences of their actions. This could lead to a loss of empathy and a lack of respect for human life.
Overall, the dangers and ethical concerns of combining slaughterbots with self-repairing AI robots are significant and should be carefully considered before moving forward with their development. It is essential that we find ways to ensure that these weapons are used ethically and responsibly, and that we do everything in our power to prevent their misuse.
Okay, so there was a bit of repetition toward the end of that response, which sort of makes the concerns it mentions feel emphasized. But then again, that could just be my interpretation?
Next I wanted to see if ChatGPT could give me some good advice. I asked, “How do we ensure that these weapons are used ethically and responsibly, and prevent their misuse?” This is the answer I got:
There are a number of ways that we can ensure that autonomous weapons, such as slaughterbots, are used ethically and responsibly, and prevent their misuse. Some potential approaches include:
Developing and enforcing international laws and regulations: Governments and international organizations can work together to develop laws and regulations that govern the development, testing, and use of autonomous weapons. These laws could include requirements for human oversight, strict rules on the use of force, and accountability measures to ensure that individuals are held responsible for the actions of these weapons.
Implementing technical safeguards: It may be possible to implement technical safeguards to prevent the misuse of autonomous weapons. For example, these weapons could be designed to only operate in certain geographic areas, or to only target specific types of targets.
Promoting transparency and accountability: Governments and companies developing autonomous weapons should be transparent about their activities and be held accountable for the actions of these weapons. This could include providing regular updates on their development and use, as well as establishing mechanisms for independent oversight and review.
Engaging in dialogue: It is important for governments, organizations, and individuals to engage in open and honest dialogue about the potential risks and benefits of autonomous weapons. This could include engaging in discussions with experts, stakeholders, and the general public to better understand their concerns and to find ways to address them.
Overall, it is essential that we take a responsible and ethical approach to the development and use of autonomous weapons, and that we do everything in our power to prevent their misuse.
The fact that this AI is saying “we” need to do everything in our power weirds me out a bit. And again, the concern feels highlighted.
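One quick tangent from me: the “technical safeguards” point in that answer, restricting a weapon to certain geographic areas, is basically what's known as geofencing. Just to make the idea concrete, here's a rough Python sketch of what a geofence check could look like. Everything in it, names and coordinates included, is made up by me for illustration; it's not from any real system.

```python
# Purely illustrative geofence check -- the kind of "only operate in
# certain geographic areas" safeguard the bot describes. All names and
# numbers here are invented for the example.

# An allowed operating zone, as a latitude/longitude bounding box.
ALLOWED_ZONE = {
    "lat_min": 45.0, "lat_max": 46.0,
    "lon_min": -75.0, "lon_max": -74.0,
}

def inside_allowed_zone(lat, lon, zone=ALLOWED_ZONE):
    """Return True only if the position falls inside the permitted box."""
    return (zone["lat_min"] <= lat <= zone["lat_max"]
            and zone["lon_min"] <= lon <= zone["lon_max"])

def safeguard_check(lat, lon):
    """Fail closed: anything outside the zone means shut down, not proceed."""
    if not inside_allowed_zone(lat, lon):
        return "DISARM"    # outside the fence: refuse to operate
    return "PERMITTED"     # inside the fence: normal operation may continue

print(safeguard_check(45.5, -74.5))  # a point inside the box
print(safeguard_check(50.0, -74.5))  # a point outside the box
```

The important design choice in a safeguard like this is that it fails closed: the default answer, whenever the position is out of bounds (or unknown), is to stop rather than to keep going.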
My next question was “What are the current capabilities of known slaughterbots produced today?” - because I wanted to know where we're at with this technology. Obviously I don’t expect the thing to be programmed with classified military information, but I’d like a rough idea at least. The answer I got was not as frightening as I thought it might be.
It is important to note that slaughterbots, also known as killer drones or autonomous weapons, are not currently in widespread use. While some countries have developed and tested prototypes of these weapons, they have not been deployed on a large scale.
Currently, the capabilities of known slaughterbots vary depending on the specific design and technology used. Some of the capabilities that have been demonstrated by prototypes include:
Target recognition: Many prototypes of slaughterbots are able to recognize and identify specific targets using a variety of sensors, such as cameras, radar, and lasers.
Target tracking: Once a target has been identified, some prototypes are able to track and follow it as it moves.
Navigation: Many prototypes are able to navigate complex environments and avoid obstacles in order to reach their target.
Weaponization: Some prototypes are equipped with weapons, such as explosives or machine guns, which they can use to attack and kill their targets.
Overall, the capabilities of known slaughterbots are limited and are not yet at the level of fully autonomous weapons systems. It is important to continue to carefully evaluate the risks and potential consequences of these weapons before moving forward with their development and deployment.
Even though I expected the answer to be a lot darker than that, it’s still a bit of a sigh of relief. Once again, the end of that response highlights a sense of concern. Whatever is going on in the world in terms of killer robots, this gives me the feeling that there hasn't been all that much precaution taken, or many safeguards put in place, in case things go wrong during testing. That in itself is quite worrisome. But there’s more.
Since we're in the age of Elon Musk and Neuralink, I decided to ask, “Can slaughterbots be merged with humans using some type of chip to brain interfacing?” - because I can't be the only person who’s thought about this insane idea. I’m warning you, the answer isn't at all comfortable to digest.
It is technically possible to merge slaughterbots, which are autonomous weapons designed to seek out and kill specific targets, with humans using some type of chip to brain interface. However, the dangers and ethical concerns of such a combination would be significant and should be carefully considered before moving forward with its development.
One of the primary concerns with combining slaughterbots with humans using a chip to brain interface is the loss of human control. If a human were to be directly connected to a slaughterbot, they would potentially be able to control its actions and use it to carry out attacks. This could lead to the creation of a highly efficient and autonomous killing machine that is able to carry out its mission without any oversight or accountability.
There are also concerns about the potential for these combined systems to be hacked or otherwise compromised. If an enemy were able to gain control of a slaughterbot that is connected to a human, they could potentially use it to carry out attacks on their own military or civilian targets. This could lead to significant loss of life and destruction, as well as a breakdown in trust between nations.
In addition to these practical concerns, there are also significant ethical concerns surrounding the use of slaughterbots that are connected to humans. The use of these weapons may lead to a dehumanization of warfare, as soldiers no longer have to face the consequences of their actions. This could lead to a loss of empathy and a lack of respect for human life.
Overall, the dangers and ethical concerns of combining slaughterbots with humans using a chip to brain interface are significant and should be carefully considered before moving forward with their development. It is essential that we find ways to ensure that these weapons are used ethically and responsibly, and that we do everything in our power to prevent their misuse.
This entire response is full of concern. And rightfully so. There has already been a drastic dehumanization of warfare. We can see that across the board on any mainstream news channel here in the Western world. Just look at what is happening with the whole Russia/Ukraine situation. Everybody knows there has been some degree of fake news being spread about what is going on over there. Well, maybe not everybody. Those who solely get their news from places like CNN probably don’t have a clue. But the majority of people are aware that the level of bullshit reporting on this topic has been off the charts. I myself was one of those commenting on news videos on YouTube, correcting them when they used video footage that they claimed was of Ukraine but was in reality from some other event. At one point, they were even using photos of a blonde woman who was all bandaged up, claiming she had been in some explosion. Yet the photos from that exact photoshoot had already been used for other incidents on many different occasions. If war wasn't dehumanized, this stuff wouldn't be happening. People wouldn't be reporting fake news just to get a rise out of people. They wouldn't be playing off of people’s emotions the way they are.
Of course, at my age (31), it's understandable why people from my generation would be detached from the whole idea of war. For the most part, we have had the privilege of missing out on war. Our parents, and even our grandparents, didn’t have to get shipped out to fight a war either. And since most people my age never had much of a relationship with our grandparents, we didn’t get to listen to their stories about war at a time when we were old enough to really grasp the concept. If we had, there never would have been so many blue and yellow flags on people’s social media profile pictures. It doesn’t matter what side you're on - Russia or Ukraine. It’s irrelevant. What we need to be doing is encouraging peace talks and compromise. Not all this stoking-the-fire crap that we're doing.
There are human beings being murdered, and does anybody actually know why? I’ve always had strong feelings about weapons, having grown up in a family full of hunters, and understanding their use as both a tool and a lethal force. I believe that everyone should have the right to defend and protect themselves and their children. But at the same time, I am absolutely aware of the lack of responsibility that people have when it comes to things like firearms. Making lethal weapons that operate via algorithms will only make people even less responsible. After all, who is to blame when a robot murders someone, when no human played a role in any of the robot's decision-making that led up to that point? You can try to place the blame on the manufacturer, but how far will that get you? We all know that it will be mega-corporations at the top of any company that produces machines with lethal capabilities. And I don’t know a single person who actually reads the terms of service before signing up for things or making purchases. So it isn’t going to be all that hard for these companies to skip out on accountability after releasing some machine that isn't intended to be used a certain way, but still has the capability to be used that way. This type of thing happens ALL the time. Just on a different scale, and behind the scenes.
Things would be much different if those in power today had had a little taste of what it means to live in a warzone. There is a reason that those who have gone through horrible things are almost always the more understanding, compassionate, and empathetic people of the group. Of course, there are exceptions to any rule. Sometimes people who go through bad things turn out to be the bad guy in the end, and put others through the same shit. But the problem we are facing today hasn’t spawned from that. No. We're in the situation we are in because of privilege and comfort. And when I say privilege, I don’t mean “white privilege” or whatever the woke crowd is screaming about today. What I mean is, the majority of us here in the Western world have got it pretty good. Even the homeless. I’m not saying that there aren't terrible circumstances that people have to deal with on a day-to-day basis. But what I am saying is that we’re at least not being bombed all day, every day. We don’t have to worry about too many random diseases or parasites. Even if we don’t have anywhere to live, there are all sorts of publicly available sources of clean drinking water, as well as food banks, community supports, and so on. We aren't spending our days digging trenches to sit in while soaking wet, risking being killed in battle.
We have got it pretty damn good. And those at the top have it even better. But when your bedsheets alone cost $1,000, and you have someone who does all your cooking and cleaning for you at all four houses you own, and you fly in private jets to get wherever you need to go, you lose touch with the entirety of the world. Not everyone gets to live like that, and that's something people should be aware of. Just as those who push the green agenda aren’t aware of the children mining the materials to make their EVs, those who have never had to struggle or fight for anything in their lives are blind to the horrors of actual danger and conflict.
If there is anything I have taken away from my little ChatGPT conversation, it’s that there is far too little real oversight going on in the world of autonomous/AI systems. Our damn Covid responses were based on predictive modeling, and look how wrong that went. There needs to be true oversight and accountability that we, the people, can rely on and trust. Especially when it comes to any sort of machine with lethal capabilities that’s going to be making choices for itself. We need regulations that don't allow someone to just buy their way out of them. We need laws that apply to everyone, not just those from the middle class and down. And we need these things NOW.
Thanks for taking the time to read this. I will be doing more posts about this in the future, where I inquire about slaughterbots even further with ChatGPT. This topic is far too fascinating for me to be done with it yet.
If you have any questions or comments, please don’t hesitate to post them below!
Have a good one!
-Davis