
Don’t Be Afraid of AI


To put it bluntly, people are scared of AI, and it is our collective responsibility to try to understand that fear. After all, as history suggests, it is by making sense of the unknown that we move beyond our existential anxieties about it.

This is not to say that people's fears are illegitimate. The sentiments expressed by striking auto workers and by actors fearing digital replacement and exploitation are echoed, and even encouraged, by statements such as the one from IBM's CEO suggesting that the company could replace nearly 8,000 jobs with AI.

But that's not the full picture.

Throughout human history there has been a constant cycle of technological innovation, existential anxiety, refinement, and ultimately absorption into the wider culture. The moving picture did not replace theater or the novel, nor did photography replace painting as an artistic medium. If anything, each new technology pushed the older mediums to grow further into themselves through inspired differentiation. The technology, once new and terrifying, was ultimately relegated to the status of "just another tool."

Despite what science fiction may assert, there is never going to be a machine that can duplicate human output or human creativity. While emerging AI technologies can and will continue to detect patterns and build databases of information, there exists within the confines of the human skull something at once so complex and so simple that no amount of code could ever predict it. The profoundly human responses to basic emotional stimuli like fear, love, jealousy, and failure, the things that have motivated almost every consequential human achievement, both positive and negative, remain impenetrable to machine learning. Quite simply, the ego cannot be contained in a series of ones and zeroes.

Being able to pull from larger datasets has its advantages. For example, a recent breast cancer screening study found that AI-supported screening detected 20 percent more cancers than conventional procedures without increasing the rate of false positives, while also reducing the radiologists' screen-reading workload by 44 percent.

Like any new tool, there are safety precautions to consider, some of which are more readily apparent than others. Privacy is an immediate concern: how are things like code, intellectual property, and even identity protected from AI? What ethical safeguards can be put in place to ensure these programs are used responsibly?

These are questions without easy answers, mainly because the technology is so new and developing at such a rapid rate. Seemingly every day brings some new development in encryption, access controls, or algorithmic design that reshapes the way we think about this emerging reality. And once the technological dilemmas of today are addressed, tomorrow will bring an entirely new set of unforeseeable problems.

In his 1996 book, A Year With Swollen Appendices, Brian Eno noted, "Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit - all of these will be cherished and emulated as soon as they can be avoided." Perhaps the difficult, awkward, and annoying features of AI will in time become qualities we look back upon fondly as they are streamlined and eroded with each pass. The ugly and already irrelevant generative art with too many fingers certainly comes to mind as something to file away in a very particular time capsule.

The things we fear are tangible: a brief list includes, but is not limited to, political unrest, economic instability, physical illness, and ultimately death itself. But we have other fears as well, more abstract and ephemeral: the unknown, the other, even ourselves. In its nascent state, artificial intelligence occupies a bit of both spaces, and how we respond is, as always, up to us.


The views and opinions expressed are solely those of the contributor and do not necessarily reflect the official position of the ANA or imply endorsement from the ANA.



Frank Lipari is executive creative director at Tappa.
