Technology fosters dependency

It is safe to say that technology has substantially contributed to the advancement of human civilization. A remarkable observation is how closely science and technology work in tandem: applying basic scientific principles can yield highly sophisticated devices, which in turn may lead to major scientific breakthroughs.

This positive feedback loop results in technological progress, which rewards countries with ample economic and social benefits.

Sadly, technology also has negative consequences. The obvious pitfalls are deadlier wars and smarter fraud schemes. Worse yet, technology can foster a culture of dependency at the expense of efficiency.

Smartphones and tablets simplify our lives by providing us with a virtually unlimited number of apps. The versatility of smartphones has tragically rendered simpler technologies, such as cameras, flash drives and calculators, obsolete.

Such a technological renewal seems surprising at first glance, considering that smartphones and tablets have only been around for a few years. Writing in The Economist Oct. 4, Ryan Avent stated that two leading factors drove the Digital Revolution: better processing power and more efficient algorithms.

According to Moore’s Law, the number of transistors that can be packed onto an integrated circuit doubles about every two years. The resulting exponential growth leads to cheaper and faster digital devices.
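As a rough back-of-the-envelope illustration (my own sketch of the arithmetic, not a calculation from Avent’s article), the doubling rule can be written as

$$N(t) \approx N_0 \cdot 2^{t/2},$$

where $N_0$ is the transistor count today and $t$ is the number of years elapsed. Over a single decade, that factor is $2^{5} = 32$, roughly a thirty-twofold increase.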

However, as Avent warns, shrinking transistors and cramming more of them onto a microchip will likely increase costs. Instead, the key solution is devising more efficient algorithms.

Now, we can justify our love for smartphones by thanking better algorithms. However, the Information Age that affords us leisure also damages the labor market. Numerous jobs are sacrificed to create a more efficient society, and the gap between skilled and unskilled workers is widening. These problems aren’t new; in fact, they arose during the two Industrial Revolutions of the 18th and 19th centuries.

Avent suggests that society is usually hesitant about such technological revolutions. Moreover, he asserts that manufacturing sectors in developing countries are becoming heavily automated, which effectively prevents unskilled workers from rural areas from finding work.

Admittedly, technological revolutions expose the problems of market failure, and we are tempted to blame technology as the culprit. Such criticism is unfair. Of course, firms will want to replace unskilled workers with electronic machinery to minimize production costs and maximize output.

That doesn’t necessarily imply that unskilled workers will remain permanently unskilled. The gap between skilled and unskilled workers is a problem of human capital. Rather than dismissing workers, firms can invest in short-term training of unskilled workers in order to increase productivity in the long term (due to better overall worker quality). Education and political stability in developing countries can ease the transition of rural workers into the modern digital world.

A popular dystopian fear in science-fiction novels is that robots, with their superior manual skills, will replace humans entirely in the labor force. This fear may seem like nonsense at first glance, but it touches on a problem in artificial intelligence called “superintelligence.”

In a recent NPR commentary, Adam Frank refers to the technological renewal problem mentioned previously, namely the production of faster and cheaper technology. The concern is that we may create machines that are smarter than we are, and that those machines will then produce even smarter machines.

This leads to a cascade effect called the technological singularity. What Frank fears is that humans cannot imagine what superintelligent machines are capable of, and that this could lead to the destruction of human civilization.

When I read Frank’s commentary, a chill ran down my spine. I would like to believe that superintelligence is still several centuries away, but with technological development accelerating, I’m not so sure. It is actually great news that Moore’s Law is reaching its limits, since that could slow this technological pace.

Yet with more efficient algorithms, and with the future invention of something better than the integrated circuit (and growth even scarier than Moore’s Law), it is hard to foresee this pace slowing down.

Fortunately, I came across another recent NPR commentary, written by Alva Noe, that dismissed my paranoia. The central question lies in what intelligence really means. Noe asserts that intelligent entities use their environment independently for their own benefit. Supercomputers are programmed by humans, so they aren’t independent.

Even organisms as primitive as amoebas, as Noe illustrates, can acquire nutrients on their own. Decomposing basic biological principles into physical laws that can be programmed is a large enough obstacle on its own, let alone creating sentient and rational machines. As such, Noe argues that superintelligence won’t be achieved anytime soon.

It is tempting to accept Noe’s argument against superintelligence with open arms, but technological advancement remains a concern. We are living in a society where driverless cars and aerial drones are becoming commonplace.

Advances in nanomedicine and biomedical engineering have provided essential treatments for once-incurable diseases and have improved the overall quality of life for patients. In a way, medicine is transforming patients into cyborgs while granting them longer and healthier lives.

Self-replicating machines, once elegant thought experiments of theoretical physicists, are slowly becoming reality. One objective behind them is to produce self-replicating spacecraft in order to advance space research.

Why stop there? Of course, we could ideally continue all the way to producing sentient and rational machines … if the government and society allow us.

What is critical here is that technology isn’t responsible for our future downfall. We are. We are unleashing our curiosity, unchained, for the sake of some future advancement to society.

Advancing technology is fine if there is a specific purpose that benefits society, such as curing diseases, improving the quality of life, enhancing travel or exploring space. With the ongoing problems of terrorism and the current Ebola outbreak, our technology is clearly still imperfect.

Even when our technology advances to the point where we can solve any problem instantly, technological diffusion will be necessary to ensure the entire world shares in that progress. Only after that can we worry about changing human civilization by creating sentient and rational machines.

Badri Karthikeyan is a senior at Drexel University. He can be contacted at [email protected].