Solving an old equation brings a new wave of AI
Dec 6, 2022

MIT researcher Ramin Hasani solved a computation-intensive differential equation, enabling an algorithm that can learn on the spot and adapt to evolving patterns.

Researchers at the Massachusetts Institute of Technology have solved a particularly challenging differential equation that dates back to the early 1900s.

The explanation gets pretty technical pretty fast, but the point is that solving this equation enabled researchers to create a new type of artificial intelligence system that can learn on the spot and adapt to changing patterns, as opposed to traditional systems in which the machine learning is based on existing patterns or expected outcomes.

Marketplace’s Kimberly Adams speaks with MIT researcher Ramin Hasani, who said it’s called a liquid neural network, and it kind of works like a human brain. The following is an edited transcript of their conversation.

Ramin Hasani: So if you want to explain how nervous systems work with mathematical tools, you translate them into differential equations. Those equations describe how behavior is computed as you go forward in time, so you can compute what the next behavior of a nerve cell, or of two interacting nerve cells, will be. Liquid neural networks are a direct inspiration from how nervous systems compute the interaction of information between two nerve cells through a synapse. The word “liquid” comes from the fact that they are adaptable to input conditions.
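For readers who want to see the idea in code, here is a minimal, illustrative Python sketch of the kind of system Hasani describes: two coupled neurons written as a differential equation whose dynamics depend on the input, stepped forward in time with a simple Euler solver. The particular equations, weights and coupling below are assumptions made for illustration, not the researchers' actual model.

```python
# Illustrative only: a toy "liquid-style" neuron pair written as a differential
# equation and simulated forward in time with a basic Euler step.
import numpy as np

def dxdt(x, inp, tau=1.0, w=0.5, a=1.0):
    # A nonlinear synapse driven by the input modulates the effective time
    # constant, so the cell's dynamics change as the input changes -- the
    # intuition behind the word "liquid."
    synapse = np.tanh(w * inp + a * x)
    return -(1.0 / tau + synapse) * x + synapse

def simulate(inputs, dt=0.01):
    x = np.zeros(2)                      # state of two interacting neurons
    trajectory = []
    for inp in inputs:
        x = x + dt * dxdt(x, inp + x[::-1])  # each cell also sees the other's state
        trajectory.append(x.copy())
    return np.array(trajectory)

states = simulate(np.sin(np.linspace(0, 10, 1000)))  # response to a changing signal
```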

Kimberly Adams: When we talk about making a discovery or coming out with this new idea, what does it look like for people who aren’t familiar with this kind of work? Are you releasing a new formula? Are you releasing a new computer program? What are you giving to the scientific community?

Hasani: We provided a new algorithm. This algorithm is based on a solution that models the interaction of two neurons. And we showed that this system can handle medical data: from the vital signs of a patient who is right now in [the intensive care unit], it can estimate that patient's future status. We showed that you can fly drones with it autonomously, you can drive self-driving cars, and many more applications. So what we provide the scientific community is exactly liquid neural networks in a more computationally tractable form.
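The tractability gain comes from replacing a step-by-step numerical ODE solver with a single closed-form update per time step. The sketch below follows the general shape of the closed-form continuous-time update the MIT group has published; the small networks f, g and h, their sizes and the random weights are simplified, illustrative assumptions rather than the exact published cell.

```python
# Hedged sketch: a closed-form state update, so no inner ODE-solver loop is needed.
# The parameterization here is a simplified illustration, not the exact published model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def closed_form_step(x, inp, t, Wf, Wg, Wh):
    """Compute the hidden state after elapsed time t in one shot."""
    z = np.concatenate([x, inp])
    f = np.tanh(Wf @ z)       # controls how quickly the state relaxes
    g = np.tanh(Wg @ z)       # state the cell decays away from
    h = np.tanh(Wh @ z)       # state the cell decays toward
    gate = sigmoid(-f * t)    # time-dependent blend of the two targets
    return gate * g + (1.0 - gate) * h

rng = np.random.default_rng(0)
hidden, features = 8, 4
Wf, Wg, Wh = (rng.normal(scale=0.3, size=(hidden, hidden + features)) for _ in range(3))

x = np.zeros(hidden)
for inp in rng.normal(size=(100, features)):        # e.g., a stream of sensor readings
    x = closed_form_step(x, inp, t=0.1, Wf=Wf, Wg=Wg, Wh=Wh)
```

Because each update is a handful of matrix multiplications rather than an iterative solve, the same kind of model can run with far less computation, which is the saving Hasani describes.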

Adams: So basically, you’re giving people the tools to do a lot more computer processing with a lot less actual computing?

Hasani: Correct, and while being closer to the capacities of natural learning systems, like our brains.

Adams: Where do you think we might see this technology showing up soon?

Hasani: This will enable larger-scale brain simulations. This closed-form solution lets neuroscientists go beyond the level at which they could simulate brains before, and they might get into new discoveries about how our brain works. And then obviously, the outcome of that would be new tools, new kinds of insights into how brain diseases can be resolved. So we could really go into that spectrum of possible outcomes, which come as a byproduct of this technology.

Adams: Here on our show, we’ve talked quite a bit about how some AI systems and algorithms come with inherent bias just baked into the models. How are you ensuring that this new algorithm doesn’t adopt some of the worst tendencies of humanity?

Hasani: These closed-form liquid neural networks are controllable and they are understandable because they’re coming from a source that is intuitive to us. When you’re training with data, you can also watch out for biases. There’s a large community of AI scientists that are devoting their time to ensuring AI fairness, AI ethics and AI accountability. And I believe that with liquid neural networks, because they’re more controllable, we would be able to actually advance even on those fronts.

Adams: Research into this kind of stuff can be such a long slog with a lot of dead ends, a lot of starting and stopping and starting over again. How did it feel when you finally landed where you wanted to be?

Hasani: So, solving this differential equation was hard. I want to give you a simple example of why it is hard to compute this differential equation, which originated in 1907. Imagine I tell you to do a multiplication. I tell you to multiply 2 × 3, which is equal to 6. Now I ask you to do 2 × a black box. I give you a black box, I don't tell you what it is, and I ask you, what is the answer to this equation? How can you say what the answer is? So basically, that was the kind of challenge that I had to deal with. I had to find a way to get more insights about this black box to be able to perform those computations. And to open that black box, I had to go through a large number of failed simulations and a lot of mathematics. I was, like, writing down equations, I was getting into dead ends. And then I arrived at a point where I wanted to stop working on the problem. And I actually stopped, because my brain didn't work anymore at that time of the night.

And then, on my way back home, I was thinking, “What if I make an assumption about something, and maybe this assumption will help solve the problem?” I was, like, almost two minutes from home. And I just walked back into the office around 9:30 p.m. and continued writing this equation. Then all of a sudden, a nice form of the equation popped up, and I was so excited that I simulated it to see whether it could mimic the behavior of the differential equation, because it's a solution of that differential equation, so it should be able to mimic that behavior even in practice. I plugged it into simulations, and then I saw: “Wow, they are matching.” It was kind of a eureka moment. And then, yeah, after two years of a back-and-forth review process and everything, the research came out, and it seems to be the right form.

Adams: That must have felt amazing.

Hasani: Yeah. The funny thing is that I wrote this solution in, I think, 2019. And you can imagine, like, a lot of thought went through this process, and then we sent it for peer review. I wrote the solution on a whiteboard, and by now the marker has dried on, so you cannot really wipe it off. So that's also a nice feeling: the solution from 2019 is still written on a whiteboard at the back of my desk at MIT. Whenever I walk into the lab, I see that it's still there. The solution is still there. It's pretty cool.


The team

Daniel Shin, Producer
Jesús Alvarado, Associate Producer