ForwardX to introduce Visual Autonomous Mobile Robot and Robot Computing System at MODEX 2020

February 13, 2020

February 13, 2020 - At MODEX 2020, ForwardX will introduce its Visual Autonomous Mobile Robot. Rather than relying on LiDAR SLAM (Simultaneous Localization and Mapping), ForwardX’s robots use a sensor fusion solution in which computer vision (CV) is the primary source for localization and obstacle avoidance, with LiDAR, wheel encoder, and IMU data serving as secondary feedback to the control loop. Because conventional LiDAR scanning is 2D, ForwardX’s 3D CV technology offers a solution better suited to warehouse, distribution center, and cellular manufacturing environments.
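As a rough illustration of this kind of fusion, the sketch below blends a vision-derived pose with drifting odometry using a simple complementary filter. The weighting scheme and the (x, y, heading) pose representation are assumptions made for the example, not ForwardX’s actual pipeline.

```python
import numpy as np

# Hypothetical sketch: fuse a vision-based pose estimate (primary)
# with odometry from wheel encoders + IMU (secondary) using a simple
# complementary filter. Weights and structure are illustrative
# assumptions, not ForwardX's algorithm.

def fuse_pose(vision_pose, odom_pose, vision_weight=0.8):
    """Blend two (x, y, heading) estimates, trusting vision more."""
    vision_pose = np.asarray(vision_pose, dtype=float)
    odom_pose = np.asarray(odom_pose, dtype=float)
    fused = vision_weight * vision_pose + (1.0 - vision_weight) * odom_pose
    # Wrap the blended heading back into [-pi, pi].
    fused[2] = np.arctan2(np.sin(fused[2]), np.cos(fused[2]))
    return fused

# Example: vision says (1.02, 2.00, 0.10); odometry has drifted slightly.
print(fuse_pose((1.02, 2.00, 0.10), (1.10, 1.95, 0.12)))
```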

Given the proliferation of warehouses, distribution centers, direct-to-consumer (D2C) business models, and fleets of AMRs (Autonomous Mobile Robots), claims of innovation deserve close examination. Unlike in prior years, when attendees would walk the show floor for days, many now use the MHI MODEX app and visit for just a single day. For that reason, ForwardX invites AMR decision-makers to Booth 1207.

Also on display at MODEX 2020 will be ForwardX’s edge/cloud architecture. At the edge, the Robot Computing System (RCS) handles multi-sensor fusion, computer-vision-based deep learning, and local path planning built on V-SLAM and reinforcement learning. In the cloud resides the Robotic Process Manager (RPM), a multi-agent cluster scheduling system that receives tasks from operator systems such as Warehouse Management Systems (WMS), Manufacturing Execution Systems (MES), and Enterprise Resource Planning (ERP) platforms.
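A minimal sketch of that cloud-side task flow might look like the following: an upstream system (WMS, MES, or ERP) submits a task, and the scheduler assigns it to an idle robot. The field names and the greedy nearest-robot rule are illustrative assumptions, not ForwardX’s actual API or scheduling policy.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    source: str      # e.g. "WMS", "MES", "ERP"
    pickup: tuple    # (x, y) location in the facility
    dropoff: tuple

@dataclass
class Robot:
    robot_id: str
    position: tuple
    idle: bool = True

def assign_task(task: Task, fleet: list) -> Robot:
    """Assign the task to the nearest idle robot (greedy rule)."""
    idle = [r for r in fleet if r.idle]
    if not idle:
        raise RuntimeError("no idle robots available")
    def dist(r):
        return ((r.position[0] - task.pickup[0]) ** 2 +
                (r.position[1] - task.pickup[1]) ** 2) ** 0.5
    chosen = min(idle, key=dist)
    chosen.idle = False
    return chosen

fleet = [Robot("amr-1", (0.0, 0.0)), Robot("amr-2", (5.0, 5.0))]
task = Task("t-001", "WMS", pickup=(4.0, 4.0), dropoff=(9.0, 2.0))
print(assign_task(task, fleet).robot_id)  # -> amr-2
```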

The RPM holistically assigns global tasks and intelligent paths for the robots. ForwardX robots rely on V-SLAM combined with deep learning techniques for positioning and navigation, which lets them adapt more readily to changing environments that LiDAR-only robots cannot handle. This ability to “learn on the fly” allows the data gathered to be used to predict the velocity and direction of obstacles with Deep Q-Learning and Asynchronous Advantage Actor-Critic (A3C) techniques. The result is more intuitive behavior that drives more collaborative and efficient local path planning.
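To make the learning idea concrete, here is a tabular Q-learning sketch on a toy one-dimensional corridor, illustrating the value update that Deep Q-Learning approximates with a neural network. The environment, rewards, and hyperparameters are made up for demonstration and have nothing to do with ForwardX’s planner.

```python
import random

N, GOAL = 6, 5                    # corridor cells 0..5, goal at cell 5
actions = [-1, +1]                # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):              # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = random.choice(actions) if random.random() < eps \
            else max(actions, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.01      # small step cost
        # Core Q-learning update toward the bootstrapped target.
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The greedy policy should now always step right toward the goal.
print([max(actions, key=lambda x: Q[(s, x)]) for s in range(GOAL)])
```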

Furthermore, by analyzing the paths typically taken in a facility, the neural network grows more capable over time. This means the robots can begin to predict what the next delivery task might look like and are therefore better prepared to accommodate peaks and troughs in delivery demand.
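As a stand-in for that kind of demand prediction, the snippet below forecasts task volume with a simple exponential moving average. The article describes a neural network; this much simpler method only illustrates the idea of anticipating peaks and troughs from past activity, and the numbers are invented.

```python
def ema_forecast(hourly_task_counts, smoothing=0.3):
    """Return a one-step-ahead exponential-moving-average forecast."""
    forecast = hourly_task_counts[0]
    for count in hourly_task_counts[1:]:
        forecast = smoothing * count + (1 - smoothing) * forecast
    return forecast

history = [12, 15, 30, 42, 38, 20]   # tasks per hour (made-up data)
print(round(ema_forecast(history), 1))
```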
