Drawing a picture of an imaginary person is easy. We have been doing it since we were kids, idly sketching doodles of people who exist only in our minds. Computers, however, don’t find it so easy. Even when trained on millions of images of human heads, their attempts to create a realistic-looking face have long been characterized by bizarre asymmetry, surreal contexts and unusual protuberances. Last week, however, a group of researchers from the computer graphics firm Nvidia unveiled the results of their latest experiment: a series of machine-created, photo-realistic images that could easily be mistaken for pictures of real people. The images posed a number of questions: what is their inherent value, what other uses could the technology be put to, and do machines that demonstrate such independence and creativity pose a wider existential threat?
Generative adversarial networks
Nvidia’s lifelike results were due to the use of what is called a generative adversarial network, or Gan, which pits two computer networks against each other in a series of rapid-fire true-or-false tests. One network generates random images. The other analyzes those images, compares them to a huge database of real ones, and tells the first network how well it is doing. Over time, the former gets better at generating and the latter gets better at classifying. In Nvidia’s experiment, the job was done when the latter network judged a computer-generated face to be as lifelike as any in its database.
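That adversarial loop can be sketched in a few lines of code. The toy example below is an illustrative sketch, not Nvidia’s system: instead of faces, a one-line generator learns to mimic a simple bell-curve distribution of numbers, while a one-line discriminator scores each sample as real or fake. All parameter names, learning rates and step counts here are arbitrary choices for the demo; the point is the back-and-forth structure of the two updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a normal distribution centred on 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = w*z + b, mapping noise z ~ N(0, 1) to fake samples.
w, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(a*x + c), estimating P(x is real).
a, c = 0.0, 0.0

lr, steps, batch = 0.05, 3000, 64
for _ in range(steps):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = w * z + b
    sr = sigmoid(a * xr + c)              # scores for real samples
    sf = sigmoid(a * xf + c)              # scores for fake samples
    # Gradients of -log d(real) - log(1 - d(fake)) w.r.t. a and c.
    a -= lr * np.mean(-(1 - sr) * xr + sf * xf)
    c -= lr * np.mean(-(1 - sr) + sf)

    # Generator step: push d(fake) toward 1 (non-saturating loss).
    z = rng.normal(0.0, 1.0, batch)
    xf = w * z + b
    sf = sigmoid(a * xf + c)
    gx = -(1 - sf) * a                    # gradient of -log d(fake) w.r.t. xf
    w -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

# After training, the generator's output distribution should sit near the
# real data's mean of 4 — the discriminator can no longer tell them apart.
fakes = w * rng.normal(0.0, 1.0, 10000) + b
```

Each round, the discriminator’s feedback (its gradient) is exactly the signal the generator uses to improve, which is the tug-of-war the article describes.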
Gans’ apparent artistic flair has developed rapidly of late. Four years ago, their best attempts were blurry black-and-white images. Today, they are able to design dental crowns, build elaborate 3D environments for use in computer games, refine telescopic images of outer space, and transform pictures of horses into pictures of zebras. Back in October, Edmond de Belamy, an artwork created by a Gan, sold at auction for US$432,500 (Dh1.6 million), more than 40 times the estimate. But Mike Tyka, a Seattle artist who works with artificial intelligence, is skeptical of the work’s artistic value. “For me, training an algorithm on a set of existing artworks such as these is the most boring use of this technology,” he says. “But that’s the operator’s choice. You can set the goal to anything you want and determine the training data, all of which will determine the outcome.”
‘That’s not smart’
Much of this research — including Nvidia’s recent experiment — seems to be aimed at creating images that fool the human brain, perhaps to showcase the power of artificial intelligence. Gans have been built that process photographs and produce versions that look remarkably like a Van Gogh or a Monet, but Peter Hall, professor of computer science at the University of Bath, doesn’t believe this means a Gan has the skill of a painter. “This technique can yield incredibly impressive results,” he says. “But Gans are just very large, complex look-up tables. If they see ‘this’, they replace it with ‘that’. In fact, all they do is take a photo and trace over the top — and that’s not smart.”
Even the least accomplished human drawings show more visual understanding than a Gan, Hall says. “I’m fascinated by the idea that a signal can go in through our eyes into our brain, come out in a different way through our hands, and still be recognizable as the same thing,” he says. “Children may draw their parents, and even though it’s little more than a scribble, we can recognize them as their parents. That’s visual understanding.” Tyka agrees: “A child’s drawing is still a far more abstract transformation of experience than anything a modern machine-learning network can manage,” he says. “But our own creativity is also based on our own experiences — our ‘training data’ — and the vast majority, if not all, of creative human work is somewhat derivative.”
DeepMind: Google’s artificial intelligence wing
While artists like Tyka use Gans to explore new forms of expression, the big tech companies are looking for practical, revenue-generating uses. Google’s parent company, Alphabet, has also been pursuing the creation of authentic-looking images in recent months. Its artificial intelligence wing, DeepMind, revealed Gan-generated pictures of a dog, a landscape, a butterfly and a cheeseburger in October, all looking remarkably close to the real thing. According to Hall, this ability to accurately approximate images has potential in the graphics industry. “Some companies will want machines to perform routine design tasks like cleaning up noise, adding some hair, changing the reflection on a car wheel, and so on,” he says. “But the way this technology is deployed in the longer term depends on what bosses want, what the economy is doing, and what politicians are doing.”
Originally published at https://planetstoryline.com on April 23, 2019.