PROJECT DETAILS

OBJECTIVES

To design and develop a reinforcement learning transportation robot.

INTRODUCTION

A reinforcement learning robot learns to operate autonomously from a user's controls. A transportation robot is used to transport things from one place to another based on a user's input actions. The robot intended to be developed is one that can learn from a user's input actions and perform mundane tasks that would otherwise require manual control by a user.

LITERATURE SURVEY

As mentioned by Se-Young Oh (2000), reinforcement learning algorithms are used in vehicle navigation and precise control, and increased stability with control is possible in high-speed navigation. Similar implementations in the load transportation robot can make it possible to navigate to its destination as fast as possible with great stability.

As studied by Sridhar Mahadevan and Jonathan Connell (1992), reinforcement learning algorithms can use positive and negative feedback on a performed task. This can be implemented in the transportation robot to maximize the accuracy of the robot's task by giving feedback to the robot during the testing phase of the learning algorithm.

In the article by Xinran Tao and John R. Wagner, the abstract states that heat rejection in ground vehicle propulsion systems remains a challenge given variations in powertrain configurations, driving cycles, and ambient conditions, as well as space constraints and available power budgets. An optimization strategy is proposed for scaling engine radiator geometry to minimize cooling system power consumption while satisfying both the heat removal rate requirement and the radiator dimension limits.

PROPOSED WORK WITH METHODOLOGY

1. Development of the hardware components of the robot.
2. Integration of controller circuits with the actuators.
3. Integration of reinforcement learning algorithms into the controller.
4. Training and testing the robot on a given load transportation task to improve the accuracy of the reinforcement learning model.
5. Field operation of the robot.

IMPLEMENTATION

1. Create CAD files of the model of the robot.
2. Identify parts from the CAD model that are commercially available and buy them.
3. Modify parts where possible so that standard parts can replace them.
4. 3D print parts of the robot that are not openly available.
5. Assemble the parts of the robot.
6. Fix the actuators in position.
7. Connect the power source to the motors.
8. Get a microcontroller and integrate it with the actuators and the power source.
9. Get image sensors, radar sensors, and proximity sensors.
10. Mount the sensors in suitable positions.
11. Integrate the sensors with the microcontroller.
12. Program the microcontroller with the reinforcement learning algorithms.
13. Provide training, testing, and working modes for the robot (a minimal sketch of this mode structure follows this list).
14. In the training mode, the robot must be able to transport loaded objects based on inputs from the remote controller operated by the user, and record them.
15. In the testing mode, the robot must be able to perform the same task without full user control. In case of errors, the user must be able to supervise and modify the robot's path, making it learn from incorrect decisions.
16. In the working mode, the robot ideally performs the user-defined task of transporting an object autonomously from one point to another, overcoming obstacles such as human intervention and uneven surfaces.
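The proposal does not fix a particular learning algorithm, so the following is only a minimal sketch, assuming a tabular Q-learning agent on a small discretized grid, of how the training, testing, and working modes in steps 13 to 16 could be organized. The GridWorld environment, the reward values, and helper names such as training_mode and naive_user are illustrative assumptions, not part of the proposal.

```python
# Minimal sketch (assumption, not the proposal's final design):
# a tabular Q-learning agent on a small grid, with the three modes
# from the IMPLEMENTATION section mapped onto it.

import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]           # discrete drive commands
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

class GridWorld:
    """Toy stand-in for the robot's workspace (hypothetical)."""
    def __init__(self, size=5, goal=(4, 4), obstacles=((2, 2), (3, 1))):
        self.size, self.goal, self.obstacles = size, goal, set(obstacles)

    def step(self, state, action):
        dx, dy = MOVES[action]
        nx = min(max(state[0] + dx, 0), self.size - 1)
        ny = min(max(state[1] + dy, 0), self.size - 1)
        nxt = (nx, ny)
        if nxt in self.obstacles:                    # collision: negative feedback
            return state, -5.0, False
        if nxt == self.goal:                         # delivery done: positive feedback
            return nxt, 10.0, True
        return nxt, -0.1, False                      # small cost per step

env = GridWorld()
Q = defaultdict(float)                               # Q[(state, action)] -> value
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def update(s, a, r, s2, done):
    """One-step tabular Q-learning update."""
    target = r if done else r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def training_mode(user_policy, episodes=50):
    """Training mode: follow the user's remote-control inputs and learn from them."""
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            a = user_policy(s)                       # action chosen by the user
            s2, r, done = env.step(s, a)
            update(s, a, r, s2, done)
            s = s2

def testing_mode(correction_policy=None, episodes=50, max_steps=200):
    """Testing mode: mostly autonomous; the user may override bad choices."""
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(max_steps):
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a2: Q[(s, a2)])
            if correction_policy:                    # optional user supervision
                a = correction_policy(s, a)
            s2, r, done = env.step(s, a)
            update(s, a, r, s2, done)
            s = s2
            if done:
                break

def working_mode(max_steps=50):
    """Working mode: act greedily on the learned values, with no user input."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(ACTIONS, key=lambda a2: Q[(s, a2)])
        s, _, done = env.step(s, a)
        path.append(s)
        if done:
            break
    return path

# Example run with a scripted "user" that simply heads toward the goal.
naive_user = lambda s: "right" if s[0] < env.goal[0] else "up"
training_mode(naive_user)
testing_mode()
print(working_mode())
```

On the real robot, the grid state would be replaced by a state built from the image, radar, and proximity sensor readings, and the scripted naive_user would be replaced by the actual remote controller inputs recorded in the training mode.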
WORK PLAN

1. To fabricate the body of the robot.
2. To power the robot using actuators and a power supply.
3. To integrate the actuator control with a microcontroller.
4. To integrate image sensors, radar sensors, and other sensors required for robot intelligence.
5. To integrate a remote control system for the robot actuators.
6. To use reinforcement learning algorithms to make the robot learn a defined object transportation task from the user controlling it with a remote controller.

EXPECTED OUTCOME / RESULTS

Our ultimate goal is to build a reinforcement-learned robot that can take part in transporting required load items to desired locations or to a person. The robot has to be taught crowd avoidance and other obstacle avoidance so that it does not collide with any objects. By the end, we expect to produce a robot that can move without colliding while transporting a load.

APPLICATIONS

In the case of a reinforcement learning robot, there are a lot of applications where different methods of learning can be used to teach the robot. This robot can be used for carrying jobs in countless fields. In the medical industry, it can be used to carry medical equipment such as syringes, stethoscopes, tablets, and clothes. The task of the robot is to detect obstacles in its way, and it should be prevented from colliding with moving objects. An aesthetic environment that attracts many visitors can be created by the use of such robots. The wheels can be modified so that the robot is able to perform in all-terrain conditions, making it fit for military purposes where it can be used to handle weapons. It can also be used to serve food in hotels with the assistance of waiters. Robots in industries can carry equipment such as drilling machines, soldering apparatus, and bolts and nuts from one place to another. It can also be used for organ transport from one point to another within a hospital in case of emergency needs.

CONCLUSION

Robots can work for long hours without getting exhausted (depending on the battery rating used), but humans cannot. If we use a robot for transportation, it will not be late and will never get tired. It adds aesthetic value and is a good model for achieving transportation purposes. With further modifications, the robot could even lift a human body and travel along with it. Using more efficient algorithms, one can achieve an even better design of the robot with much more accuracy (a simple sketch of one possible reward design is given below).
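As a complement, the following is a hedged sketch of one possible reward design for the collision-free transport objective described under EXPECTED OUTCOME / RESULTS. The Observation fields, distance thresholds, and reward magnitudes are assumptions for illustration only and would need tuning on the actual robot.

```python
# Hypothetical reward shaping for the collision-free transport goal.
# Sensor fields, thresholds, and reward magnitudes are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    distance_to_goal_m: float      # e.g. from odometry / localization
    min_proximity_m: float         # closest obstacle from proximity/radar sensors
    collided: bool                 # bump or stall detected
    delivered: bool                # load placed at the destination

def reward(prev: Observation, curr: Observation) -> float:
    """Reward progress toward the drop-off point, penalize near-misses and collisions."""
    r = 0.0
    r += 2.0 * (prev.distance_to_goal_m - curr.distance_to_goal_m)  # progress term
    if curr.min_proximity_m < 0.3:                                  # too close to an obstacle
        r -= 1.0
    if curr.collided:                                               # hard penalty for contact
        r -= 10.0
    if curr.delivered:                                              # task completed
        r += 20.0
    r -= 0.05                                                       # small time penalty per step
    return r

# Example: one step where the robot moved 0.2 m closer without incident.
prev = Observation(distance_to_goal_m=3.0, min_proximity_m=1.0, collided=False, delivered=False)
curr = Observation(distance_to_goal_m=2.8, min_proximity_m=0.8, collided=False, delivered=False)
print(reward(prev, curr))   # ~0.35
```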
