
Monday, June 6, 2011

Critical Thinking Blog Post # 9

This course has shown me that Science and the Humanities are intimately connected. The scientific advancements we receive assist (and sometimes hinder) our continued existence, and our needs and concerns often prompt science into action. It is a cyclical relationship that is not always portrayed kindly. If crops are dying out, science has the potential to strengthen them through modifications, which can be seen as unethical meddling in nature. The question then becomes, is it better to starve or to accept a deviation from the natural order?
I found that my original ideas on science and the humanities have been strengthened to an almost bleak position. Let me explain. I have wrestled with many questions during this course, questions I have tried to answer in one semester, questions that have in one way or another entered my mind over the last decade through my exposure to film and books. Is cloning wrong? Is AI a threat to mankind? What is it to be alive? If there are three of me, which one am I, and does it matter? Is saving my own life over another an abhorrent act of selfishness? I feel now that I have acquired at least the first parts of my answers to all of these questions, though I am not exactly happy to hear them.
Cloning can help millions. It can save lives and improve the way we live. It can also scare the crap out of us, especially if we breed whole lives, not just body parts. I like the idea of having a spare heart growing somewhere that can be used in case I ever fall victim to a stroke. I don’t like the idea of another me being sacrificed to retrieve said heart. Ethics plays an important role in deciding whether or not to advance this field, but I feel this is an easier task than that of AI.
AI is a threat, because humanity will make it so through fear and feelings of incompetence. The robots will be better than us. That much is clear. What isn’t clear is how much we’re going to hate them for it, like a parent who secretly wishes for his son to trip on the last leg of the race he never won in his youth. The biggest problem here is that robotics and AI are inevitable. I think part of the reason for this is that the science of 1’s and 0’s is led by consumers. A program that responds to your commands as quickly as you can think them will sell. It will be included in the new iPhone, the new personal computer, the new watch… and so on. People want this. They just don’t want what will come of it, in the same way people want the newest version of their phone and ditch their old model, which ends up on the shores of a far-off country corrupting the soil. We don’t like the thought of a five-year-old picking through motherboards for precious metals, but we looooooooooove our new iPad. I don’t see this problem being fixed, so I’ll be one of the first people in line to get injected with nanobots that will turn me into a walking computer. Not because I like the idea, but because it is inevitable. :-/

It's going to happen. Just saying.
As for the question of altruism, I am still wrestling with that, but I feel that rational self-preservation is not a bad thing. The main problem with that argument is this: how can one be rational at a time of crisis, when immediate action is needed and duress is pressed on the mind? Rationality is not generally seen as an immediate action, which means my argument would have to be centered on an unconscious ‘rational’ aspect that is present in our minds and works as a potential survival instinct. I’ll let you know when I finish reading the 90-page paper I have found on the topic.

Dammit if this doesn’t make at least a little sense.
I’m glad to have used this blog as a conduit to express my thoughts and share my findings. Although I have experienced some difficulties keeping up with the actual finished posts, I have spent a great deal of time compiling my notes and refining what it was I wished to say on here. I can only hope this came through in my writing. I plan to continue this discourse on technology and the promises and perils it can bring for as long as it is a relevant topic to me, and seeing as how I consider myself both a science and a humanities person, I expect this to be a very long time.

Critical Thinking Blog Post # 6

For my research paper I will be writing about Philip K. Dick’s prospect of a future filled with androids who are almost indistinguishable from human beings. I will explore the merits of this cautionary tale and find an argument as to whether advancements in technology pose a threat to humanity and whether or not it should slow down or cease altogether.
I have two sources which will serve to help this discussion. The first is an article written by Ray Kurzweil for Newsweek in 2005. Mr. Kurzweil, an inventor and proponent of our continued advancement of technology, gives an incredibly positive outlook on humanity’s future. He predicts that artificial intelligence will match the computing power of the human brain by 2030 and is poised to surpass human capacity thereafter. This article, found on LexisNexis, also discusses the continued research being done on nanotechnologies. He feels that these tiny robots will have the ability to fight cancers, unclog arteries, and work seamlessly with our body’s own natural defenses, thus introducing a stronger, healthier version of humanity. While tinkering with genetics and advancing an intelligence foreign to the human mind might seem a bit terrifying to some, Kurzweil sees this as an opportunity for humanity to embrace a new paradigm of existence in this world. I feel his attitude, although arguably overoptimistic, has a quality that is often ignored in this world. Most often, I have witnessed either fear or ignorance when announcements have come of fantastic new breakthroughs in sciences once thought impossible to meddle in.
Another source I have found helpful is the book We, Robot by Mark Stephen Meadows. In this book, Mr. Meadows shows how advancements in robotics have assisted many people for decades, from replacing lost limbs to creating military robots that disarm bombs and provide soldiers with reserve rations and ammunition. In slight contrast to Mr. Kurzweil, he does go on to mention possible dangers in advancing these technologies too far. Namely, if we are to create a race of robots to serve and help us, and we willingly upgrade these servants with AI that facilitates their functions by allowing them the use of what might be considered ‘human’ attributes such as emotion, self-awareness, and a desire to help, then we might find ourselves in the business of slavery once again. The only thing needed to spark a standoff would be a robot receiving an order and replying with a stern ‘No’. Mr. Meadows warns that to avoid what would be an undesirable conflict, we must define the limits of our advancement and ensure these limits are enforced.
Cloning, AI, and eugenics are all too often labeled terrifying prospects, but the good that can come from understanding them must be examined in order to find a coherent answer to the question, “How far is too far in the race for technology?”