- Artificial intelligence can now recognise and respond to emotions
- New voice imitation algorithm can mimic speech and even add emotion
- Is it a tram? Is it a train? Is it a bus? Introducing China’s new self-driving... thing
- Nanoparticles are now able to communicate with one another
Technology has advanced quickly in the 21st century, with new and improved machines and electronics being introduced at breakneck speed. New technologies improve healthcare, influence our culture, change the way we travel and completely transform our economy. In this article, we look at four significant tech breakthroughs from last month.
1. Artificial intelligence can now recognise and respond to emotions
Researchers at the University of Cambridge have developed artificial intelligence that can tell when its subject is in pain. Their specially developed machine learning algorithm detects painful conditions early by reading and assessing distinct facial features on - in this case - a sheep's face. The research was presented at the International Conference on Automatic Face and Gesture Recognition in Washington last month.
To develop the Sheep Pain Facial Expression Scale (SPFES) system, the researchers exposed the algorithm to a dataset of 500 photos taken of sheep, teaching the AI to identify various distinct facial features a sheep displays when it’s in pain. The algorithm was then taught to rank these on a scale of 1 to 10, assessing the severity of the pain. Tests indicated that the system was able to identify pain levels with an accuracy of 80 percent.
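The pipeline described above - learn which facial cues signal pain, then score each face on a 1-to-10 scale and measure accuracy against expert labels - can be sketched in rough form. Everything below is illustrative: the feature names, the scoring rule and the toy data are invented for the example and are not the researchers' actual model.

```python
# Illustrative sketch of an SPFES-style pipeline: score facial-feature
# measurements on a 1-10 pain scale, then check accuracy against labels.
# Feature names and the scoring rule are invented for this example.

def pain_score(features):
    """Map normalised facial-cue activations (0.0-1.0) to a 1-10 scale."""
    # SPFES tracks cues such as ear posture and eye narrowing; here each
    # cue simply contributes equally to the overall score.
    total = sum(features.values()) / len(features)   # 0.0 .. 1.0
    return max(1, round(total * 10))                 # clamp to 1 .. 10

def accuracy(samples, tolerance=1):
    """Fraction of samples whose predicted score is within `tolerance`."""
    hits = sum(
        abs(pain_score(f) - label) <= tolerance
        for f, label in samples
    )
    return hits / len(samples)

# Tiny hand-made 'dataset' of (features, expert label) pairs.
dataset = [
    ({"ears_back": 0.9, "eyes_narrowed": 0.8, "nostrils_tight": 0.7}, 8),
    ({"ears_back": 0.2, "eyes_narrowed": 0.1, "nostrils_tight": 0.1}, 1),
    ({"ears_back": 0.5, "eyes_narrowed": 0.6, "nostrils_tight": 0.4}, 5),
]

print(accuracy(dataset))
```

The real system replaces the hand-written scoring rule with a model trained on the 500 labelled photos, but the evaluation loop - predict, compare with an expert label, report the hit rate - is the same shape.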
As early as 1872, in his book The Expression of the Emotions in Man and Animals, Darwin claimed that humans and many animals display emotion through remarkably similar behaviours. And there is a strong similarity between the facial expressions of sheep and those of humans when they are in pain. Eventually, the SPFES system could be used to read emotions from human faces as well. The team's aim is to teach the system how to read facial expressions in sheep from moving images, even when the animal is not directly facing the camera. Capable of early detection of painful conditions that need immediate attention, the algorithm - even in its current state - could already be used to improve the quality of life of livestock.
2. New voice imitation algorithm can mimic speech and even add emotion
The Montreal-based artificial intelligence startup Lyrebird – yes, named after the bird known for its ability to mimic sounds from its environment – has recently created a voice imitation algorithm. Not only can the deep learning technology mimic the speech of a real human, it can also alter the emotional intonation, all with just one audio snippet. The Lyrebird API compresses the ‘DNA’ of a 1-minute voice recording into a unique key. This key can then be used to generate any type of audio with its corresponding voice. Emotions in the generated voice, such as sympathy, stress and anger, can also be manipulated.
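The key-based workflow described above can be sketched as a mock client. To be clear, Lyrebird's real API is not shown here: the class, method names and behaviour below are all invented to illustrate the idea of compressing a recording into a reusable voice key and then generating speech with a chosen emotion.

```python
# Hypothetical sketch of the key-based workflow described in the article.
# This is NOT Lyrebird's API; all names and behaviour are invented.
import hashlib

class VoiceMimic:
    """Mock client: a recording is compressed into a fixed 'voice key',
    which is then reused to render arbitrary text in that voice."""

    def extract_key(self, recording_bytes):
        # Stand-in for compressing a 1-minute sample into the voice's
        # 'DNA': here just a deterministic digest of the audio bytes.
        return hashlib.sha256(recording_bytes).hexdigest()[:16]

    def generate(self, voice_key, text, emotion="neutral"):
        # A real system would synthesise audio; the mock returns a
        # description of what would be produced.
        return f"[voice {voice_key} | {emotion}] {text}"

client = VoiceMimic()
key = client.extract_key(b"one minute of sample audio")
print(client.generate(key, "Hello world", emotion="anger"))
```

The point of the sketch is the separation of concerns: the expensive step (extracting the key) happens once per speaker, after which any amount of speech, in any emotional register, can be generated from that same key.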
Their public demo consists of algorithm-fabricated voice samples of Obama, Trump and Hillary Clinton, as well as a completely fabricated discussion between the three.
Lyrebird’s intention is to publicly release the audio mimicry API, enabling people to use the technology as they please. In an era where fake news is spiraling out of control, this breakthrough, however fascinating, may make matters even worse. The development could have serious consequences in terms of fraud and identity theft, as anyone will be able to generate a ‘voice recording’ of any person with relative ease.
Alexandre de Brébisson, one of the PhD students developing the technology, looks at it a little differently. He actually wants people to become aware of the existence of the technology and realise that audio recordings are no longer reliable or even authentic, as they can now easily be fabricated and manipulated. The startup released the following statement on their website:
Voice recordings are currently considered as strong pieces of evidence in our societies and in particular in jurisdictions of many countries. Our technology questions the validity of such evidence as it allows to easily manipulate audio recordings. By releasing our technology publicly and making it available to anyone, we want to ensure that there will be no such risks. We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible. More generally, we want to raise attention about the lack of evidence that audio recordings may represent in the near future.
It is not yet clear whether the Lyrebird API will be free. Basic samples or simple features may be free at first, after which the startup could move to a freemium model.
The voice samples still sound quite metallic, as the one-minute recordings used for the demos do not contain all of the voice’s ‘DNA’. More data would significantly improve the quality. The Lyrebird team believes that, in a matter of years, perfect vocal speech synthesis – indistinguishable from the real thing – will be possible. Since their release, the demos have attracted thousands of website visitors and the startup has also been contacted by “several famous investors”.
The technology has many potential future applications. Think audio book readings, digital assistants, speech synthesis for people with speech disabilities, video and movie animations and navigation systems with famous voices.
3. Is it a tram? Is it a train? Is it a bus? Introducing China’s new self-driving…thing
Whether it’s buses, trucks, taxis or boats, many companies are working on making existing modes of transportation autonomous. One Chinese company, however, is taking the development of self-driving technology a step further. To speed up public transportation and to offer affordable mass transit for cities lacking the budgets to build tram tracks or subways, the streets of Zhuzhou, in Hunan Province, will soon see vehicles that resemble a cross between a train, a tram and a bus.
Chinese rail-maker CRRC Zhuzhou Locomotive didn’t think that turning existing forms of transportation into autonomous versions was enough, so it invented a brand new type of self-driving vehicle: the ART, or Autonomous Rail Rapid Transit. The ART is a tram, train and bus all rolled into one: it’s modular like a train, it drives on roads like a bus, and it runs only in designated lanes like a tram. The fully electric vehicle is fitted with a suite of sensors to detect road dimensions and avoid obstacles, and it follows white painted road markings. To accommodate varying numbers of passengers, carriages can be added or removed. The transit system can reach speeds of 70 kilometres per hour and can travel over 25 kilometres on one ten-minute charge. A 3-carriage ART is just over 30 metres long and can carry up to 307 passengers. Although the ‘thing’ is self-driving, it will have a human driver on board in the initial stages, just in case. CRRC plans to deploy the first ART system in 2018.
4. Nanoparticles are now able to communicate with one another
Nanotechnology’s biggest stumbling block is that nanobots aren’t (yet) able to properly communicate with each other. If they are to really work together, which is the ultimate goal, communication is critical. Josep Miquel Jornet, from the University at Buffalo in New York, said “A nanomachine by itself cannot do much. Just like you can do many more things if you connect your computer to the Internet, nanomachines will be able to do many, many more things if they are able to interact.”
Current technology that enables nanotech to communicate still relies on bulky components, but researchers from the Complutense University of Madrid have managed to create artificial nanoparticles that use chemical signals to communicate with each other. To establish communication at the nanometric level, they actually took their cue from nature. Professor Reynaldo Villalonga and his colleagues studied how bacteria and cells use chemicals to communicate and they managed to teach nanoparticles how to do the same.
In their research, they coated particles with reactive materials that underwent chemical transformations as they interacted with each other. This led to the particles releasing a type of dye, indicating that the communication had been successful. The method still needs to be tested inside a human body, but the breakthrough certainly shows great potential in developing technology that can detect - and ultimately, prevent - disease. Villalonga’s dream is to create an autonomous nanobot that can be used to fight cancer. Their breakthrough certainly is one of the first steps towards that goal.
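The handshake the researchers describe - one particle emits a signal molecule, a second particle's reactive coating responds and releases a dye confirming contact - can be modelled as a toy exchange. All names and rules below are invented for illustration; the real chemistry is far richer than a string comparison.

```python
# Toy model of the chemical 'handshake' described above: particle A emits
# a signal molecule; particle B reacts to it and releases a dye marking
# the communication as successful. All names and rules are invented.

class Particle:
    def __init__(self, name, emits=None, reacts_to=None):
        self.name = name
        self.emits = emits          # molecule this particle releases
        self.reacts_to = reacts_to  # molecule that triggers its coating

    def receive(self, molecule):
        """Return a dye marker if the incoming molecule triggers a reaction."""
        if molecule is not None and molecule == self.reacts_to:
            return f"dye released by {self.name}"
        return None                 # no reaction: communication failed

a = Particle("A", emits="signal-1")
b = Particle("B", reacts_to="signal-1")

print(b.receive(a.emits))   # successful handshake: dye confirms contact
print(b.receive("noise"))   # wrong molecule: no dye, no communication
```

The dye plays the role of an acknowledgement: without it, there is no way to tell whether the signal was ever received - which is exactly why the researchers used it as their success indicator.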
Technology is inescapable; it permeates every aspect of our lives. It changes how we work and play and has created a revolution that will continue to disrupt. And while we can’t really predict what lies ahead, we can see how far we’ve come in merely the past decade and realise that what’s the latest and greatest today will be old news tomorrow.