How one KITE scientist is changing the way robots think. Literally.
It’s morning, and an older woman who is recovering from a stroke sits at the kitchen table. Her adult son, who has temporarily moved into the family home to help, makes coffee for his mother before heading to work, placing the mug on the table as he leaves. The woman can’t wait for that first sip, but there’s a problem: her stroke has caused some weakness on one side of her body, which makes it difficult for her to lift and hold things, like her morning brew.
Imagine a robotic arm attached to the table that “sees” the coffee-filled mug through its cameras. Using neural networks and artificial intelligence (AI) to process the visual information, it recognizes the coffee. Then, another algorithm, trained using reinforcement learning, autonomously controls the robotic arm to pick up the mug and pass it to her without spilling a drop.
While this scenario isn’t yet possible, Dr. Brokoslaw Laschowski hopes it will be soon. Why? The Research Scientist and Principal Investigator at the KITE Research Institute at University Health Network (UHN) is working to create intelligent robots to help people with physical disabilities. The novelty of his robotics research stems from his expertise in computational neuroscience.
“I research brain-inspired algorithms and brain-machine interfaces,” he explains. “With regard to the first, I develop machine learning algorithms to mathematically model human perception, cognition, and motor control. These ‘artificial brains’ can then allow robots to think and control themselves.”
While AI has existed as a field for more than half a century, and deep learning – a technology designed to recreate how the human brain processes information, now considered the foundation of modern AI – was developed in the 1980s, integrating AI into the physical world is a problem that computer scientists are still trying to solve.
“Even the most sophisticated robots developed to date, such as the Boston Dynamics backflip robot, aren’t fully autonomous,” he adds. “It’s one of the grand challenges we face as a field – bringing artificial intelligence into the real world.”
Understanding the brain
From his undergrad and master’s degrees in neuroscience and kinesiology to a second master’s and PhD in engineering, Dr. Laschowski has always been captivated by how the brain works. For instance, during his second master’s degree in mechanical and mechatronics engineering at the University of Waterloo, he developed mathematical models, based on optimal control theory, of how the brain optimizes and controls human movement.
“I love research. The exploration and creation of new theories and technology – it’s addictive. I’m completely obsessed,” he says. “I didn’t just want to study neuroscience at the cellular level; I’m interested in mathematically modelling how the neural computations in the brain see, think, and control movement – an area known as computational neuroscience.”
Dr. Laschowski began to wonder how he might apply his mathematical and machine-learning models of the brain to robotics. After seeing someone walking with an early version of a robotic prosthetic leg, he immediately knew his next area of focus: giving intelligence to robots that physically assist humans.
“I thought to myself, ‘I understand a little bit about how the human brain controls a biological leg. What if we created an artificial brain to control a robotic leg?’” he explains. “That’s how it all started.”
Training robots to see
Dr. Laschowski, who is a Core Faculty Member of the Robotics Institute at the University of Toronto (U of T) and Director of the Neural Robotics Lab, has received considerable media attention for his robotics research, such as being featured in a keynote talk by the President and CEO of NVIDIA. However, he emphasizes that he specializes in developing the “brains” that power robotic systems rather than the physical robots themselves.
A key component of that process is giving robots the ability to see, which involves far more than cameras. If you think about how humans see, the eyes are only the first step of the process – the sensors that take in visual information. The visual cortex and other parts of the brain are then responsible for processing and understanding that information.
Although cameras are the first step in giving robots “vision,” they still require an artificial brain to process the visual information. Over the years, Dr. Laschowski and his lab have developed neural networks that mimic how neurons in the human brain work in order to process visual information from cameras, like the ones mounted on their custom-built smart glasses. These neural networks then become the foundation on which robots learn to perform movements in the real world, such as walking down the street or picking up a coffee cup.
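To make the idea concrete, here is a minimal sketch – with invented details throughout, since the scene categories, network size, and camera resolution are all illustrative rather than the lab’s actual design – of how a small convolutional neural network, written in Python with the PyTorch library, could classify camera frames into hypothetical scene categories such as level ground, stairs, or an obstacle:

```python
import torch
import torch.nn as nn

# A minimal convolutional network: stacked convolution + pooling layers
# play a role loosely analogous to early visual processing in the brain,
# and a final linear layer maps the extracted features to scene categories.
class SceneClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):  # hypothetical: ground/stairs/obstacle
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # extract visual features from the frame
        x = torch.flatten(x, 1)     # flatten features for the linear layer
        return self.classifier(x)   # one score per scene category

# Hypothetical usage: a batch of eight 64x64 RGB frames from a head-mounted camera.
frames = torch.randn(8, 3, 64, 64)
logits = SceneClassifier()(frames)
predictions = logits.argmax(dim=1)  # predicted scene category per frame
```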
In addition to modelling how the human brain processes visual information, Dr. Laschowski is also developing models to reverse engineer how the brain controls movement. These brain-inspired controllers can then be programmed into robots so that they move and behave more like humans.
Training robots to think
Dr. Laschowski likens the training of his learning algorithms to his new role as a father, providing his young daughter with a safe environment in which she can learn to walk and feed herself. “When we train our reinforcement learning algorithms, there’s a certain amount of exploration. The artificial brain, like the human brain, learns by interacting with the environment and experiencing failures and rewards,” he says.
In humans, the neural networks in the motor cortex and other parts of the brain are responsible for motor control, such as when you pick up a coffee cup. “One of the goals of my research is to mathematically model these neural computations, which, in addition to advancing our scientific understanding of the brain, can also be used to direct robots to move like humans,” he explains.
Researchers at the Neural Robotics Lab use computer simulations to teach their reinforcement learning algorithms to mimic human movements in an effort to reverse engineer human motor control. Within the simulation, their algorithms learn these brain-inspired control policies. Once the researchers recreate human-like movements on the computer, they transfer their brain-inspired controllers from the simulation to physical robots.
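In highly simplified form, that training loop can be sketched as follows. Everything here is an illustrative stand-in rather than the lab’s actual setup: the “simulation” is a one-dimensional reach-the-target task instead of a physics model of a body, and the learning rule is a basic REINFORCE policy-gradient update. The shape of the process is what matters – the policy tries actions, the simulated environment returns rewards, and the policy’s parameters are nudged toward the actions that earned higher rewards.

```python
import torch
import torch.nn as nn

# Toy "simulation": the agent starts at position 0 and must reach position 5.
# Each step it moves left or right; the reward is the negative distance to
# the target, so positions closer to the target earn higher rewards.
def run_episode(policy, steps=20):
    position = 0.0
    log_probs, rewards = [], []
    for _ in range(steps):
        state = torch.tensor([position])
        probs = torch.softmax(policy(state), dim=-1)  # action probabilities
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()                        # exploration: sample an action
        log_probs.append(dist.log_prob(action))
        position += 1.0 if action.item() == 1 else -1.0
        rewards.append(-abs(5.0 - position))          # reward: closeness to target
    return log_probs, rewards

policy = nn.Linear(1, 2)  # a deliberately tiny "artificial brain"
optimizer = torch.optim.Adam(policy.parameters(), lr=0.05)

for episode in range(200):
    log_probs, rewards = run_episode(policy)
    # Return-to-go: credit each action with the rewards that followed it.
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()  # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a real pipeline, the simulated body would be a physics model of a human or robot, and only once the learned controller produced stable, human-like movement would it be transferred to hardware – the step from simulation to physical robots described above, often called “sim-to-real.”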
However, Dr. Laschowski stresses his research is mainly mathematical and computer-based, noting that people often assume he’s the one building physical robots. “I don’t even have a screwdriver in my lab,” he says, laughing.
The vision and brain technology his lab is developing can be applied to many different machines, such as prosthetics, robotic arms, humanoids, and exoskeletons. For instance, imagine a boy with a spinal cord injury who uses a robotic exoskeleton for walking assistance. He could wear the lab’s smart glasses, which then connect wirelessly to the robot.
“As he’s looking around while wearing our glasses, this system – the boy and the robotic exoskeleton together – acts like an autonomous car: the robot can control itself because the neural networks on our smart glasses give it the ability to see,” Dr. Laschowski explains.
The same boy could instead choose to use the lab’s brain-machine interface to control the robot with his thoughts; to that end, the lab is also developing algorithms to decode movement intent from brain recordings. Alternatively, a hybrid of the two technologies – where the artificial brain usually has autonomous control, but the boy can take over manual control whenever he wants by using the brain-machine interface – could be the best option.
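One common, simplified pattern for such intent decoding – shown here with entirely synthetic signals and a basic logistic-regression classifier, which may differ substantially from the lab’s actual methods – is to extract features such as per-channel signal power from short windows of a brain recording, then train a classifier to map those features onto intended commands like “walk” or “stop”:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for brain recordings: 200 one-second windows,
# 8 channels x 250 samples each, labelled 0 = "stop" or 1 = "walk".
windows = rng.standard_normal((200, 8, 250))
labels = rng.integers(0, 2, size=200)
windows[labels == 1] *= 1.5  # fake "intent" signature: higher amplitude

# Feature extraction: log signal power (variance) per channel.
features = np.log(windows.var(axis=2))

# Train a linear decoder that maps features to intended commands.
decoder = LogisticRegression().fit(features, labels)

# Decode a new window into a command the exoskeleton could act on.
new_window = rng.standard_normal((1, 8, 250)) * 1.5
command = decoder.predict(np.log(new_window.var(axis=2)))[0]  # 0 = stop, 1 = walk
```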
“The long-term vision for my research is to study, along this spectrum of autonomy, what level of control individual people prefer, which remains one of the major unsolved research questions in the field,” he says.
Ultimately, Dr. Laschowski is working diligently to give robots intelligence by developing artificial brains and interfacing with the human brain. He hopes these intelligent robots will soon be used to assist people with physical disabilities in the real world, just like that older woman who, despite her stroke, wants to enjoy her morning coffee.
This Is KITE is a storytelling series that aims to excite and inspire audiences as well as showcase the Institute’s people, discoveries and impressive range of research. The campaign will feature monthly stories and videos that chronicle key projects under KITE’s three pillars of research: Prevention, Restoration of Function, and Independent Living/Community Integration.