[Day 1] AI and philosophy (3 min read)


Why does philosophy matter? Is it better to learn science and practical skills than to discuss something that seems good for nothing? To me, learning science is learning how to build a nuke. Learning philosophy is learning how (or how not) to use it. Science is a horse running wild, and only philosophy has the power to control it.

As a computer science major (common labels: geek, freak), I’m extremely interested in the rapid development of the field of Artificial Intelligence (AI) over the last few years. AI is preparing to conquer the world in the next decade. The list of what AI can do keeps expanding, from self-driving cars to Facebook chatbots, from speech-to-text recognition to tools that can detect particular online behaviors. However, as AI technology keeps advancing, the need to learn and discuss philosophy becomes more relevant than ever.

Let’s consider self-driving car development. The first levels of building a self-driving car involve creating the ability to control internal factors: the engine, oil levels, the wheels. Then the car must be programmed to understand external factors, such as how to tell the road from the pavement, or a human from a dog, so it can function safely. This is all technical work. However, the ultimate questions a self-driving car must answer are philosophical ones. If something unexpected happens that forces the car to choose between its owner and other pedestrians, what will it choose? What will the car’s first priority be? It is easy to say it should protect human lives, but which human lives? Can the car choose to sacrifice its owner’s arm or leg to save a family crossing the road carelessly? If the car chooses to save its owner from injury at the cost of other people’s lives, who is responsible?


  The trolley dilemma

Even if we create a perfect car that can act like a perfect human, it is still not a human. And what is a perfect human? Should he have perfect goodness? What is perfect goodness? Who has the authority to judge what is good and what is not? In his dialogue with Euthyphro, Socrates pointed out that even the holiest man of his time could not explain clearly what defines goodness. Computer science was not made to answer these questions. They belong to the field of philosophy.

AI, in a way, is our newest, strongest tool for conquering the world. AI is the iron of our age. Iron can be forged into household tools, but it can also be turned into weapons of destruction. By questioning the nature of everything, including human life, philosophy gives us a chance to examine every premise we make, and it helps guide us toward wiser decisions in the future. Philosophy is what keeps us different from AI. AI can do the work, but we are the ones in control. The more powerful the tool, the wiser its owner must be. Without teaching and encouraging philosophy in society, we will eventually create a generation of destroyers. And yeah, that would be the end.


Plato. Euthyphro. Indiana University, 2010. 
