Google releases an open-source AI sound synthesizer while DeepMind tries to really understand AI!

Google is gearing up for an AI race

Google’s Magenta project recently released an open-source AI synthesizer called ‘NSynth Super’ for artists and music enthusiasts. Meanwhile, DeepMind, another Alphabet subsidiary, is working on techniques to really understand how AI makes its decisions.

So, Google launched an AI product while its sister company is still trying to understand AI?

No, not really. Let us explain.

Y’all remember when AI bots coming up with their own conversations made the headlines. Well, DeepMind is clearly trying to avoid a repeat by researching how AI makes its decisions at the core. The newly released ‘NSynth Super’, by contrast, is a well-researched and well-understood piece of AI tech. It’s undeniably safe and sound.

DeepMind – Diving deep

DeepMind, famously known for building the AI bot that outplayed the Go champion, is working on a much deeper project aimed at understanding how AI makes decisions.

And it’s not easy. Our brains comprise billions of connected neurons that act concurrently to make decisions. Even the simplest emotion triggers multiple layers of neurons in unpredictable ways.

Similarly, AI relies on multiple layers of neurons and clusters of mathematical calculations to make decisions. Those neurons often combine and work together in complex and counterintuitive ways, solving a range of sub-problems before finally settling on a decision. As the technology has spread, the debate has kept circling back to scientists’ and engineers’ inability to predict AI decisions.

A few people (hint: one of them owns a rocket company) have already voiced opposition to the tech by highlighting that unpredictability. Researchers, too, acknowledge that the more complex the system, the harder it is for humans to understand.

By knowing how AI works, DeepMind hopes to build smarter systems.

According to the research, a neural network trained to recognize pictures of cats triggers two sets of neurons: one set interprets the images, while the other is a confusing set that seems to have no clear impact on the system.

To understand their roles, the researchers deleted individual neurons and observed the effect on the whole network (a toy version of this deletion experiment is sketched after the list below). They concluded two things:

1. Neurons with no obvious preference for images of cats over pictures of other animals play as big a role in the learning process as those that clearly respond only to images of cats.

2. Networks built on neurons that generalize, rather than simply memorizing images they had previously been shown, are more robust.
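
Here is a minimal sketch of that deletion (ablation) idea, assuming a small scikit-learn classifier rather than DeepMind’s actual models: train a network, silence one hidden unit at a time by zeroing its weights, and measure how much the accuracy drops.

```python
# Minimal ablation sketch: train a small classifier, then "delete" hidden
# units one at a time and watch the accuracy change. The dataset, layer
# size, and scoring are illustrative choices, not DeepMind's setup.
from copy import deepcopy

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
baseline = clf.score(X_test, y_test)

for unit in range(5):  # ablate the first few hidden units as a demo
    ablated = deepcopy(clf)
    ablated.coefs_[0][:, unit] = 0.0    # zero the unit's incoming weights
    ablated.intercepts_[0][unit] = 0.0  # and its bias, silencing it entirely
    drop = baseline - ablated.score(X_test, y_test)
    print(f"unit {unit}: accuracy drop {drop:.4f}")
```

Units whose deletion barely moves the score look like the “confusing” set; the finding above is that they still matter to how the network learned in the first place.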

“Understanding how networks change will help us to build new networks which memorize less and generalize more, and we hope to better understand the inner workings of neural networks, and critically, to use this understanding to build more intelligent and general systems,” the researchers said in a blog post.

Researchers clearly acknowledged that humans still need to understand AI better. DeepMind research scientist Ari Morcos told the BBC: “As systems become more advanced we will definitely have to develop new techniques to understand them.”

Also read: ‘Neural Networks’ and ‘Deep Learning Algorithms’ are just supporters – Not Creators!

Google NSynth Super

Google’s research project Magenta, which explores how machine learning can help artists create new sounds, came up with ‘NSynth’ (Neural Synthesizer).

Here’s how Google defines NSynth: “It’s a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds, and then create a completely new sound based on these characteristics.”

NSynth doesn’t combine or blend existing sounds. Instead, it creates entirely new sounds using the acoustic qualities of the original sounds.

How does it work?

  1. Inputting sounds – You feed in whichever sounds you like (bass, flute, snare, etc.).
  2. Extracting features – The encoder extracts the acoustic characteristics of the input sounds.
  3. Interpolation – The algorithm learns the extracted features and interpolates between them to produce a new set of features in its internal representation.
  4. Decoding – The decoder turns those newly invented features back into audible sound (see the toy sketch after this list).
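
A toy, self-contained sketch of those four steps, assuming simple spectral features in place of the real model (NSynth itself is a WaveNet-style autoencoder from the Magenta project; the `encode` and `decode` functions below are illustrative stand-ins, not Google’s code):

```python
import numpy as np

SR = 16000  # sample rate (an assumption for this demo)

def encode(audio: np.ndarray) -> np.ndarray:
    """Toy 'encoder': summarize one second of audio as 16 coarse spectral bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = spectrum[:1600].reshape(16, 100).mean(axis=1)  # 100 Hz-wide bands
    return bands / (bands.max() + 1e-9)

def decode(embedding: np.ndarray) -> np.ndarray:
    """Toy 'decoder': rebuild audio as a harmonic stack weighted by the embedding."""
    t = np.linspace(0, 1, SR, endpoint=False)
    freqs = 110 * (np.arange(len(embedding)) + 1)  # harmonics of 110 Hz
    return sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(embedding, freqs))

# 1-2. Input two source sounds and extract their features.
t = np.linspace(0, 1, SR, endpoint=False)
bass = np.sin(2 * np.pi * 110 * t)
flute = np.sin(2 * np.pi * 880 * t)
z_bass, z_flute = encode(bass), encode(flute)

# 3. Interpolation: blend the learned features, not the raw waveforms.
z_mix = 0.5 * z_bass + 0.5 * z_flute

# 4. Decoding: turn the blended features back into a brand-new sound.
new_sound = decode(z_mix)
```

Even in this toy, the key design point survives: the blend happens in the feature space, so the result is a new timbre rather than two sounds simply played on top of each other.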

As part of this exploration, Google created ‘NSynth Super’, an experimental instrument built on the NSynth algorithm in collaboration with Google Creative Lab. It can produce over 100,000 new sounds and accepts four different source sounds at a time (a sketch of that four-way blend follows below).
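
Mixing four sources at once maps naturally onto bilinear interpolation, with one source at each corner of a control surface. A minimal sketch of that idea, assuming hypothetical 16-dimensional embeddings (the function and dimensions are illustrative, not the NSynth Super firmware):

```python
import numpy as np

def blend_corners(z_nw, z_ne, z_sw, z_se, x, y):
    """Bilinearly weight four corner embeddings by a position in the unit square."""
    top = (1 - x) * z_nw + x * z_ne
    bottom = (1 - x) * z_sw + x * z_se
    return (1 - y) * top + y * bottom

# Four imaginary source-sound embeddings, one per corner of the pad.
rng = np.random.default_rng(0)
corners = [rng.random(16) for _ in range(4)]
z = blend_corners(*corners, x=0.25, y=0.75)  # a position near the lower-left corner
```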

The prototype has been shared with a small group of musicians to better understand how they might use it in their creative processes.

Stay tuned for our detailed reports on the prototype and on DeepMind’s research paper.
