How AI is Used in Robotics
Artificial intelligence is one of those odd things: people either embrace it fully and see all its possibilities, or they reject it wholeheartedly as something evil. We’ve seen horror movies that depict AI as a terrible force designed to let computers and robots take over the world, but what if the reality is somewhere in between? What if AI and robots can work together to take over, but only in the areas where we really need them? In this article, you’ll see how AI is used in robotics.
That’s the idea behind integrating robots and AI. After all, it can become tedious to code everything into a robot’s “brain.” With AI, it can continue to learn once you teach it a few basic things. We already see a drastic increase in how much AI is used in our daily lives, even in robotics.
How We Currently Use AI with Robotics
AI and robotics are already being used together in a myriad of ways. Let’s take a look at some of the more common ones before getting to the big one, self-driving vehicles.
・Assembly and Packaging
In factories, you’ll frequently find robots managing the assembly of products, as well as the packaging for sale. These processes were originally done by simple robots or machines that were programmed to manage just one task at a time. Now, however, AI can learn from what it’s doing and refine the moves to make the entire process simpler and more efficient.
・Customer Service
You can find AI robots throughout retail stores and even hotels in some parts of the world. They’re designed to learn language processing so they can interact easily with customers and work with them. As they help more customers, they get better and better at speaking and aiding. The robots become more human over time, so customers are more comfortable with them.
・Agriculture
Some people are using robots that can be taught to do simple chores in agriculture. The robots can easily learn new skills and tasks without being programmed for each one, making them more useful on the farm. While this is not widespread yet, there are whispers that it could become a bigger project, and it’s just one more way AI is used in robotics.
Self-Driving Cars and AI
When you think of the ultimate robot/AI combination, you probably think of self-driving cars. They’re not perfect yet, but Elon Musk has been working toward this goal for years now, and artificial intelligence is a very important part of it. It’s impossible to program every single possible issue into a vehicle, but when a car encounters a new problem, not only does its AI learn how to deal with it, but every car on the same system learns as well.
It’s estimated that there will be around 4.5 million self-driving cars in the United States by 2035. That’s still a small percentage of all cars, but it’s something that more and more people are coming to terms with. The autonomous vehicle industry is growing by roughly 16% around the world on an annual basis.
According to Nissan News, 55% of small businesses feel that fleets will be completely self-driven within the next two decades. If that’s the case, the roads could be a lot safer than they are now, but it also means that the learning curve needs to be fast. The AI installed in these cars has to learn exactly what is acceptable when driving and how to manage the potential issues that human error causes.
Since people will probably never give up the wheel completely, there will be room for mistakes for a very long time to come. AI has to adapt to that, and this is where the ability to discern hidden agendas and intent comes in handy. When a robotic car can determine the intent of the driver next to you, it can automatically react to avoid an accident. These vehicles already tend to be faster than humans at responding to potential threats, and that will only improve over time.
Why Robots Need to Learn About Intent
We can teach robots all day how to do tasks and predict what someone will do, but they’re logical at best. That means when a robot predicts something, it bases the prediction on what it has previously learned and how it perceives the world, much as humans do. However, robots tend not to take the person’s intent into consideration. That’s what computer scientists at the Stanford Center for Human-Centered Artificial Intelligence are studying now. Dorsa Sadigh and Chelsea Finn have put together a team to look at the way robots react to the things and people around them. Rather than looking at every possible action and trying to provide a potential solution for each, the robots actually need to consider what is behind the movements.
Think about chess, for example. You can teach a robot or computer all the different combinations of moves, and it can learn from people, as well. At the moment, a chess robot plots every possible move and then selects the one most likely to win. That’s fine for chess, but in real life, such as with an automated car, you need something simpler.
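The “plot every possible move” approach the paragraph describes can be sketched in a few lines of code. Full chess would take far more code, so this hedged example uses the simple game of Nim (take 1–3 stones; whoever takes the last stone wins) to show the same idea: exhaustively search every line of play and pick the move that forces a win. The function names and game choice are illustrative, not any particular chess engine’s method.

```python
# Minimal sketch of exhaustive game-tree search (minimax): enumerate
# every possible move, recursively evaluate the opponent's best reply,
# and pick the move most likely to win. Illustrated with Nim instead
# of chess to keep the move generation tiny.

def best_move(stones, maximizing=True):
    """Return (score, move). Score is +1 if the side to move can force
    a win, -1 otherwise; move is how many stones to take (1-3)."""
    if stones == 0:
        # The previous player took the last stone and won, so the
        # side now to move has already lost.
        return (-1 if maximizing else 1), None
    best = None
    for take in range(1, min(3, stones) + 1):
        score, _ = best_move(stones - take, not maximizing)
        if (best is None
                or (maximizing and score > best[0])
                or (not maximizing and score < best[0])):
            best = (score, take)
    return best

score, move = best_move(7)
print(score, move)  # with 7 stones, the first player forces a win by taking 3
```

Even for this toy game, the search visits every reachable position. That exhaustiveness is exactly why, as the article notes, the approach doesn’t scale to messy real-world settings like driving.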
The computer scientists realized that instead of expanding what a robot knows, it really needs to focus on hidden intentions. The robot learns to influence other machines in a give-and-take process. This is a more human strategy, referred to as a latent strategy. We don’t tend to mull over every single potential move as we walk toward someone on the sidewalk. We just consider their primary objective and react to that. For example, if someone is walking toward you but also moving toward a store, you will assume they’re headed for the store and act accordingly, moving to let them pass or walking around them. There’s no need to calculate every single movement, no matter how small. It’s an excellent indication of how AI is used in robotics.
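The sidewalk example above can be sketched as code: instead of enumerating every possible future movement, the robot keeps a belief over a small set of hidden goals and updates it from the motion it observes. To be clear, this is a toy Bayesian illustration of intent inference, not the Stanford team’s actual algorithm; the goals, positions, and noise model are all invented for the example.

```python
import math

# Toy sketch of intent (hidden-goal) inference. Rather than simulating
# every possible move, we track a probability for each candidate goal
# and boost the goals whose direction matches the observed heading.

GOALS = {"store_entrance": (5.0, 2.0), "continue_down_sidewalk": (10.0, 0.0)}

def heading_toward(pos, goal):
    """Angle (radians) from the pedestrian's position to a goal."""
    return math.atan2(goal[1] - pos[1], goal[0] - pos[0])

def update_belief(belief, pos, observed_heading, noise=0.5):
    """Bayesian update: likelihood falls off with the gap between the
    observed heading and the heading each goal would produce."""
    new = {}
    for goal, target in GOALS.items():
        diff = observed_heading - heading_toward(pos, target)
        likelihood = math.exp(-(diff ** 2) / (2 * noise ** 2))
        new[goal] = belief[goal] * likelihood
    total = sum(new.values())
    return {g: p / total for g, p in new.items()}

# A pedestrian at (3, 0) is seen angling up and to the right,
# roughly toward the store entrance at (5, 2).
belief = {g: 1 / len(GOALS) for g in GOALS}
belief = update_belief(belief, (3.0, 0.0), observed_heading=math.atan2(2, 2))
print(max(belief, key=belief.get))  # prints "store_entrance"
```

One observation is enough to shift the belief toward the store, and the robot can then plan around that single inferred objective instead of calculating every movement, no matter how small.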
This robotic AI application is not yet perfected and will need a lot of fine-tuning before it’s ready for the real world. The ability to anticipate moves and decisions that are made with hidden agendas in mind will help robots work better and eventually become more useful to us.