A Bit Creepy: A Machine-Created Lullaby Wants to Put You to Sleep Faster Than a Human

A healthcare company has produced two lullabies, one composed by a human and the other by a machine, and it's calling on you to decide which is better. (CC0 / modified by Tom Ozimek)
Tom Ozimek
11/28/2017 | Updated: 12/5/2017

A healthcare company is trying its hand at the human versus artificial intelligence (AI) contest, and it wants us to play along.

Health services company AXA PPP Healthcare commissioned two pieces of music—one composed by a human and another by AI—and has challenged the public to decide which makes you more sleepy, reports the Mirror.

The first tune is by renowned composer Eddie McGuire, while the other is the product of a machine that used artificial neural networks and received no help from a human.

McGuire calls his composition “Lyrical Lullaby.” It was created in conjunction with Bede Williams, head of Instrumental Studies at the University of St. Andrews.

“Lots of people report of a falling sensation as they fall asleep, and many lullabies mimic this by containing melodies made up of descending patterns in the notes. Lyrical Lullaby has this essential feature and many other musical devices which can induce in us a state of restfulness,” Williams told the Mirror.

To come up with “Lullaby,” an AI-capable machine was taught to compose using sheet music in a computer-readable format.

“An artificial neural network is essentially a representation of the neurons and synapses in the human brain—and, like the brain, if you show one of these networks lots of complex data, it does a great job of finding hidden patterns in that data,” said Ed Newton-Rex, creator of the machine that produced the composition, according to the Mirror.

“We showed our networks a large body of sheet music, and, through training, it reached the point where it could take a short sequence of notes as input and predict which notes were likely to follow.

“Once a network has this ability, it essentially has the ability to compose a new piece, as it can choose notes to follow others it’s already composed.”
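
Newton-Rex's description maps onto a standard next-note prediction setup in sequence modeling. Below is a minimal sketch of that general idea, assuming Python and PyTorch; it is an illustration only, not the actual system behind the composition, and the model sizes, the MIDI-pitch vocabulary, and the seed notes are all invented for the example. A network learns to predict which note follows a sequence, then composes by repeatedly sampling its own prediction.

```python
import torch
import torch.nn as nn

VOCAB = 128  # hypothetical vocabulary: one token per MIDI pitch

class NextNoteModel(nn.Module):
    def __init__(self, vocab=VOCAB, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab)

    def forward(self, notes):
        # notes: (batch, seq_len) integer note tokens
        x = self.embed(notes)
        out, _ = self.lstm(x)
        return self.head(out)  # logits over the next note at each position

def compose(model, seed, length=32):
    # Feed the piece so far, sample a likely next note, and repeat --
    # the "choose notes to follow others it's already composed" loop.
    notes = list(seed)
    for _ in range(length):
        logits = model(torch.tensor([notes]))[0, -1]
        probs = torch.softmax(logits, dim=-1)
        notes.append(int(torch.multinomial(probs, 1).item()))
    return notes

model = NextNoteModel()  # untrained here, so its "composition" is noise
print(compose(model, seed=[60, 62, 64]))  # C, D, E as a toy opening
```

In a real system the network would first be trained on a large corpus of note sequences, exactly as Newton-Rex describes, before its predictions become musically plausible.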

The company behind the stunt is AXA PPP healthcare, part of the AXA Group.

The gimmick seems harmless enough compared to recent bold strides in the area of machine-human relations, such as granting citizenship to a robot. The Mirror reported recently that a humanoid robot created by a company in Asia was granted Saudi Arabian citizenship at a special ceremony in October, giving the machine more rights in the country than women have.
Citizen Sophia. (Flickr/AI for GOOD Global Summit, CC BY)
And one need not strain to hear alarm bells ringing. Elon Musk, in his now legendary campaign cautioning against excessively intelligent machines, has famously compared work on AI to “summoning the demon.”

“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees at the National Governors Association summer meeting on Jul. 15, 2017. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

Elon Musk, co-founder and CEO of Tesla Motors, speaks at the 2015 Automotive News World Congress in Detroit, Michigan, Jan. 13, 2015. (Bill Pugliano/Getty Images)
Less dramatic but equally salient remarks were recently made by John Giannandrea—who leads AI at Google—to MIT Technology Review.
Google's Senior VP of Engineering John Giannandrea speaks onstage during TechCrunch Disrupt SF 2017 at Pier 48 in San Francisco, California, on Sept. 19, 2017. (Steve Jennings/Getty Images for TechCrunch)

“The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” Giannandrea told MIT Technology Review.

Giannandrea worries that as cloud-based AI becomes more accessible, it will become easier for bias to creep in. With people in the driver's seat who lack the technical knowledge to assess the underlying data and algorithms for quality and bias, machine intelligence could actually increase the incidence of bad decisions.

“If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it,” Giannandrea said.
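
Giannandrea's point is easy to demonstrate. The toy sketch below, assuming Python and scikit-learn with entirely invented data, trains a classifier on historical decisions that were skewed against one group; the model dutifully learns that skew as a negative weight on group membership.

```python
# Invented data: equal skill across groups, but historical hiring was
# biased against group B. A model trained on those labels learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(0.0, 1.0, n)        # genuine qualification signal
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
# Biased historical labels: group B was hired less often at equal skill.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.3, n)) > 0

clf = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("weight on skill:", round(clf.coef_[0][0], 2))  # positive, as expected
print("weight on group:", round(clf.coef_[0][1], 2))  # negative: learned bias
```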

And as intelligent technologies become more advanced, they become more mysterious, which is a problem if they are to be relied on to run our everyday lives. In an MIT Technology Review article titled “The Dark Secret at the Heart of AI,” a compelling case is made for restricting the development, or at least the deployment, of AI to a level that does not exceed the human ability to understand it.

“We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Carlos Guestrin, a professor at the University of Washington. “We’re a long way from having truly interpretable AI.”

What “interpretable” means in the context of AI is simply that an explanation of how it works can be provided in a way that is rational and understandable to humans. However, cutting-edge machine learning, or deep learning, is heading in the opposite direction—towards greater complexity.
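
As a concrete, deliberately simplistic illustration of the contrast, assuming Python and scikit-learn with invented data: a linear classifier is interpretable in this sense because its learned weights state why it predicts what it does, whereas a deep network's millions of weights offer no comparable account.

```python
# Toy example: the model's weights double as a human-readable explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data in which the label simply follows feature_0.
X = np.array([[1, 0], [0, 1], [1, 1], [0, 0]], dtype=float)
y = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
for name, weight in zip(["feature_0", "feature_1"], clf.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")  # the 'why' behind each prediction
```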

To some, what makes the promise of machine intelligence so enticing is that it can do what humans can't. But that's also where the danger lies: it can accomplish this only by becoming too complex for humans to grasp.

Daniel Dennett, a philosopher and cognitive scientist at Tufts University, told MIT Technology Review: “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible.”

Otherwise, the machines may become too smart for our own good.

“If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

A special thank you: the compositions referenced in this material were provided by the AXA PPP healthcare sleep center (https://www.axappphealthcare.co.uk/health-information/sleep/).
Tom Ozimek is a senior reporter for The Epoch Times. He has a broad background in journalism, deposit insurance, marketing and communications, and adult education.