Human toddlers are inspiring new approaches to robot learning

  • August 8, 2023
It’s an exciting time for robotic learning. Organizations have spent decades building complex datasets and pioneering different ways to teach systems to perform new tasks. It seems we’re on the cusp of some real breakthroughs when it comes to deploying technology that can adapt and learn on the fly.

Over the past year, we've seen a large number of fascinating studies. Take VRB (Vision-Robotics Bridge), which Carnegie Mellon University showcased back in June. The system is capable of applying learnings from YouTube videos to different environments, so a programmer doesn't have to account for every possible variation.

Last month, Google's DeepMind robotics team showed off its own impressive work, in the form of RT-2 (Robotic Transformer 2). The system is able to abstract away the minutiae of performing a task. In the example given, telling a robot to throw away a piece of trash doesn't require a programmer to first teach it to identify the trash, pick it up and dispose of it, the steps behind a task that seems simple (for humans, at least).

Additional research highlighted by CMU this week compares its work to early-stage human learning; specifically, the robotic AI agent is compared to a three-year-old toddler. For context, the learning is broken into two categories: active and passive learning.

Passive learning in this instance is teaching a system to perform a task by showing it videos or training it on the aforementioned datasets. Active learning is exactly what it sounds like — going out and performing a task and adjusting until you get it right.

RoboAgent, which is a joint effort between CMU and Meta AI (yes, that Meta), combines these two types of learning, much as a human would. Here that means observing tasks being performed via the internet, coupled with active learning by way of remotely teleoperating the robot. According to the team, the system is able to take learnings from one environment and apply them to another, similar to the VRB system mentioned above.
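
To make the passive-plus-active recipe concrete, here is a minimal, purely illustrative Python sketch. It is not RoboAgent's code or API; the policy, datasets and numbers are made-up stand-ins for the general idea of pretraining on a large, noisy corpus of passively observed behavior and then fine-tuning on a small amount of actively collected, teleoperated, in-domain data.

# A toy two-phase imitation pipeline: passive pretraining, then active fine-tuning.
# Every name and number here is invented for illustration; this is not RoboAgent code.

import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM = 8, 2  # toy observation/action sizes


def make_dataset(n, weights, noise):
    """Stand-in for a demonstration dataset: observations paired with target actions."""
    obs = rng.normal(size=(n, OBS_DIM))
    actions = obs @ weights + rng.normal(scale=noise, size=(n, ACT_DIM))
    return obs, actions


true_w = np.full((OBS_DIM, ACT_DIM), 0.5)   # the in-domain behavior we actually want
proxy_w = true_w + 0.1                      # a biased proxy, like labels distilled from video

# "Passive" data: plentiful but noisy and slightly off-domain (think internet video).
passive_obs, passive_act = make_dataset(5000, proxy_w, noise=0.5)
# "Active" data: scarce but clean and in-domain (think teleoperated robot trajectories).
active_obs, active_act = make_dataset(200, true_w, noise=0.05)


class LinearPolicy:
    """A toy linear policy trained by gradient descent on an imitation (MSE) loss."""

    def __init__(self):
        self.w = np.zeros((OBS_DIM, ACT_DIM))

    def fit(self, obs, act, steps=200, lr=0.1):
        for _ in range(steps):
            grad = obs.T @ (obs @ self.w - act) / len(obs)
            self.w -= lr * grad

    def error(self, obs, act):
        return float(np.mean((obs @ self.w - act) ** 2))


policy = LinearPolicy()

# Phase 1: passive learning -- imitate the large corpus of observed behavior.
policy.fit(passive_obs, passive_act)
print("in-domain error after passive pretraining:", policy.error(active_obs, active_act))

# Phase 2: active learning -- fine-tune on the small teleoperated dataset,
# the part a human operator keeps adjusting "until you get it right".
policy.fit(active_obs, active_act)
print("in-domain error after active fine-tuning: ", policy.error(active_obs, active_act))

The only point of the sketch is the shape of the pipeline: the cheap, plentiful passive data does most of the work, and the scarce in-domain active data closes the remaining gap.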

“An agent capable of this sort of learning moves us closer to a general robot that can complete a variety of tasks in diverse unseen settings and continually evolve as it gathers more experiences,” Shubham Tulsiani of CMU’s Robotics Institute says. “RoboAgent can quickly train a robot using limited in-domain data while relying primarily on abundantly available free data from the internet to learn a variety of tasks. This could make robots more useful in unstructured settings like homes, hospitals and other public spaces.”

One of the cooler bits is that the dataset is open source and universally accessible. It's also designed to be used with readily available, off-the-shelf robotics hardware, meaning researchers and companies alike can both draw on and add to a growing trove of robot data and skills.

“RoboAgents are capable of much richer complexity of skills than what others have achieved,” says the Robotics Institute’s Abhinav Gupta. “We’ve shown a greater diversity of skills than anything ever achieved by a single real-world robotic agent with efficiency and a scale of generalization to unseen scenarios that is unique.”

Image Credits: CMU

This is all super promising stuff when it comes to building and deploying multipurpose robotics systems, with an eye toward eventual general-purpose robots. The goal is to create technology that can move beyond the repetitive machines in highly structured environments that come to mind when we think of industrial robots. Actual real-world use and scaling are, of course, a lot easier said than done.

We are still much closer to the beginning than the end when it comes to these approaches to robotic learning, but it's an exciting period for emerging multipurpose systems.

