
Intelligent Robot Control System Using VR

Abstract

As robotics advances, the expectations of its users are also increasing. To meet the expectation that robots add ease and comfort to a task, virtual reality is combined with robotics. This combination also leads to more efficient human-robot interaction, as well as better interaction between the robot and its environment. This work describes the three bases of the intelligent control system: stereo vision, virtual reality, and the controller. It also describes a special robot programming language and the simulation of contact forces.

Contents

Introduction
Evaluation of the field – technologies
Technology (Control system)
  A. Stereo vision system
  B. Virtual Reality system
  C. Grasp simulation
  D. The SRPS Language
Projects Related to it
  Applications in the manufacturing industry
  Applications in the healthcare industry
  Applications in military robotics
Social, legal and ethical impact
Economic and industrial impact
Environmental impact
Discussion
Challenges, dangers, and opportunities
Conclusions
References

Introduction

The intelligent robot control system is a system that interacts with its environment. The control system is built on three main bases. The first is the robot controller, which is responsible for the control signals sent to the actuators, path planning, and object grasping. The second is the stereo vision system, which gathers information about the robot's surroundings and feeds it to the other two bases; its output consists of the types of recognized objects and their relative positions and orientations. The final base, virtual reality, is the bridge between the user and the robot: it can display the planned work before it takes place. There are two types of visualization output: an image overlaid on the real camera image, and a view from a user-defined viewpoint. A robot's task is described by a two-level language: a higher level of indirect commands based on sensor output, and a lower level of direct commands.

Evaluation of the field – technologies

Technology (Control system)

A. Stereo vision system

The stereo vision system is used not only to detect the position and orientation of objects but also to support grasping and path planning. The stereo method has four main steps.

  1. Image processing: This step takes care of feature detection and extraction in the camera images and solves the matching problem between the two views.
  2. Reconstruction of projective structure: This step recovers the 3D projective structure of the scene from the matched features.
  3. Object recognition: By matching the extracted features against stored object models, the object in view is identified and 3D coordinates are assigned to its features.
  4. Calculation of the Euclidean transformation: Using the recognized features and their counterparts in the database, the Euclidean transformation is computed, which gives the 3D coordinates of the image points in the scene (a minimal numeric sketch of this step follows the list).
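
The article does not spell out how the Euclidean transformation in step 4 is computed; a common choice is a least-squares rigid alignment (the Kabsch/SVD method), and the sketch below assumes that matched 3D scene and model points are already available as NumPy arrays. It is an illustrative sketch, not the system's actual routine.

    import numpy as np

    def estimate_rigid_transform(scene_pts, model_pts):
        """Estimate the rotation R and translation t mapping model_pts onto scene_pts.

        scene_pts, model_pts: (N, 3) arrays of matched 3D feature coordinates.
        """
        scene_c = scene_pts.mean(axis=0)
        model_c = model_pts.mean(axis=0)
        # Cross-covariance of the centred point sets.
        H = (model_pts - model_c).T @ (scene_pts - scene_c)
        U, _, Vt = np.linalg.svd(H)
        # Correct a possible reflection so that R is a proper rotation.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = scene_c - R @ model_c
        return R, t

The returned rotation and translation give the object's pose, which the controller and the virtual-reality visualization can then use.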

B. Virtual Reality system

Virtual reality is used to visualize, in real time, the actions the robot will take on behalf of the human operator. It is also a necessary condition that the robot does not collide with itself or with objects in the environment. Calibration is required to relate the virtual world to the real environment. The cameras used are not calibrated either, so their parameters are identified from pictures taken by a real camera and then used in the visualization stage.

Collision detection: Because the environment changes dynamically, the robot cannot follow a fixed, pre-planned trajectory, so the path and whether it is being followed must be determined in real time. To do that, path points can be fed to a virtual robot in the virtual system without disturbing the real one. The collision-detection algorithm is hierarchical: it runs low-precision tests first and only then upgrades to more accurate tests with higher computational cost (a sketch of this coarse-to-fine filtering follows the list below). The possible pairwise test cases for collision detection are as follows:

  • Neighboring segments of the robot, which might collide with each other.
  • A movable object, tested against the other objects.
  • Objects that cannot collide with the robot because they are far away or not in its path, which can be excluded.
  • Items that can be merged into a single entity, such as the parts of a gripper.
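
As a rough illustration of this coarse-to-fine hierarchy, the sketch below filters candidate pairs with a cheap test before running an expensive one. The function names and interface are assumptions made for illustration, not the system's actual API.

    def colliding_pairs(candidate_pairs, coarse_test, fine_test):
        """Hierarchical collision filtering: cheap tests first, costly ones last.

        candidate_pairs: object pairs that can actually collide (far-away
        objects and parts merged into a single entity are excluded beforehand).
        coarse_test / fine_test: e.g. a bounding-box test and a triangle-level test.
        """
        collisions = []
        for a, b in candidate_pairs:
            if not coarse_test(a, b):   # cheap, conservative rejection
                continue
            if fine_test(a, b):         # accurate but computationally expensive
                collisions.append((a, b))
        return collisions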

The levels of collision detection are as follows:

Axis-parallel bounding boxes

At this level, each object in the environment is enclosed in a box whose edges are parallel to the axes of the world reference frame, and these boxes are tested against each other. The tests consist entirely of comparison operations, so they are computationally cheap, but the results are only approximate.
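
A minimal sketch of such an axis-parallel box test, assuming each box is given by its minimum and maximum corner coordinates in the world frame:

    def aabb_overlap(box_a, box_b):
        """Axis-parallel bounding-box test using comparisons only.

        Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)).
        Overlap here only means 'possibly colliding'; a finer test follows.
        """
        (a_min, a_max), (b_min, b_max) = box_a, box_b
        return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))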

Arbitrary (object-oriented) bounding boxes

This algorithm looks for a separating plane between two boxes by projecting the boxes onto a candidate line. If the projected intervals are disjoint, the boxes do not touch each other and the line is called a separating axis; if the intervals overlap on every candidate line, the boxes overlap. Which lines need to be tested? If the boxes are disjoint, there are two cases: they are either on top of each other or side by side. If the boxes are on top of each other, the separating axis is perpendicular to the edges; if they are side by side, it is perpendicular to a face of the box. If the intervals overlap on the axes tested so far, additional lines must be tested to find a separating axis; if none is found, the boxes overlap. This algorithm was later beaten in speed by a quicker variant called the radius test: the centers of the boxes are projected onto the candidate axis, the radii of the projected intervals are summed, and if the sum is smaller than the distance between the projected centers, the boxes do not overlap.
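
The sketch below illustrates the radius test on one candidate axis together with the usual enumeration of candidate axes (face normals and edge cross products). The 15-axis enumeration is the standard separating-axis construction and is an assumption here, since the article does not list the candidate axes.

    import numpy as np

    def separated_on_axis(c_a, axes_a, half_a, c_b, axes_b, half_b, axis):
        """Radius test on one candidate axis.

        axes_*: (3, 3) matrices whose columns are a box's local axes;
        half_*: half-extents along those axes; c_*: box centres.
        Returns True if the boxes are disjoint along 'axis'.
        """
        axis = axis / np.linalg.norm(axis)
        r_a = np.sum(half_a * np.abs(axes_a.T @ axis))   # projected radius of box A
        r_b = np.sum(half_b * np.abs(axes_b.T @ axis))   # projected radius of box B
        dist = abs(np.dot(c_b - c_a, axis))              # distance between projected centres
        return dist > r_a + r_b

    def obb_overlap(c_a, axes_a, half_a, c_b, axes_b, half_b):
        candidates = [axes_a[:, i] for i in range(3)] + [axes_b[:, i] for i in range(3)]
        candidates += [np.cross(axes_a[:, i], axes_b[:, j])
                       for i in range(3) for j in range(3)]
        for axis in candidates:
            if np.linalg.norm(axis) < 1e-9:   # parallel edges give a degenerate axis
                continue
            if separated_on_axis(c_a, axes_a, half_a, c_b, axes_b, half_b, axis):
                return False                  # a separating axis was found
        return True                           # no separating axis: the boxes overlap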

Arbitrarily oriented triangles

The previous algorithms do not give exact results, but since the robot's control system has errors of its own that cause deviations from the path, exact results are sometimes unnecessary. When accurate results are needed, however, intersection tests between triangles can be used. The test is a two-level process:

  • If all the points of one triangle lie on one side of the plane of the other triangle, the triangles do not overlap.
  • If one of the edges of a triangle intersects the other triangle, they overlap.

The next step is to carry out further tests only for the boxes whose triangles intersect.
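
A minimal sketch of the first of these two checks, assuming each triangle is a (3, 3) array of vertex coordinates (the edge-level check is omitted):

    import numpy as np

    def separated_by_plane(tri_a, tri_b):
        """Return True when all vertices of tri_b lie strictly on one side of
        the plane of tri_a, in which case the two triangles cannot intersect."""
        normal = np.cross(tri_a[1] - tri_a[0], tri_a[2] - tri_a[0])
        signed = (tri_b - tri_a[0]) @ normal   # signed distances to the plane
        return bool(np.all(signed > 0) or np.all(signed < 0))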

Calibration

The generated model and environment should resemble the real robot and its surroundings as closely as possible so that the operator can control the real robot safely and easily. This resemblance, together with the fact that the virtual robot immediately follows the given trajectory, helps the operator visualize the motion and catch any occasional error in time.

C. Grasp simulation

Grasp simulation determines whether the dextrous hand is actually holding the object. Grasping is described purely from a geometric point of view: a set of assumptions and geometric properties is defined to support the simulation of the forces between the robotic hand and the object to be grasped. The modelling software handles triangles only, because the normal vector is the most important parameter for the contact force and the normal vector is constant over a triangle.
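
The article only states that the face normal is the key parameter of the contact force; the force law itself is not given. The sketch below therefore uses a hypothetical penalty (spring-like) magnitude purely for illustration.

    import numpy as np

    def contact_force(tri, penetration_depth, stiffness=500.0):
        """Illustrative contact force for one triangular face.

        tri: (3, 3) array of vertices. The unit normal is constant over the
        whole triangle, so it fixes the force direction at any contact point.
        The spring-like magnitude (stiffness * depth) is an assumption.
        """
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        n = n / np.linalg.norm(n)
        return stiffness * penetration_depth * n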


Surface modeling

As discussed before, certain assumptions and geometric properties are defined. The proposed system uses B-Rep (boundary representation) to describe an object. For grasp computation the fingertip is described by a sphere, while for visualization a B-Rep model can be used. The assumption is that, at any point in time, a fingertip interacts with at most one triangle of the object. Furthermore, if parts of the hand other than the fingertips also touch the object, the grasp can be treated as a form closure.
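
Under the sphere-fingertip and one-triangle-at-a-time assumptions, a contact check can be sketched as follows. It tests face contact only (a complete check would also consider the triangle's edges and vertices) and is an illustration, not the system's actual routine.

    import numpy as np

    def fingertip_contact(center, radius, tri):
        """Simplified sphere-fingertip vs. triangle contact check.

        center, radius: the sphere modelling the fingertip.
        tri: (3, 3) array holding one triangle of the object's B-Rep.
        Returns (in_contact, unit_normal).
        """
        normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        normal = normal / np.linalg.norm(normal)
        dist = np.dot(center - tri[0], normal)      # signed distance to the face plane
        if abs(dist) > radius:
            return False, normal
        foot = center - dist * normal               # projection of the centre onto the plane
        # Barycentric test: does the projection fall inside the triangle?
        v0, v1, v2 = tri[1] - tri[0], tri[2] - tri[0], foot - tri[0]
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        v = (d11 * d20 - d01 * d21) / denom
        w = (d00 * d21 - d01 * d20) / denom
        return bool(v >= 0 and w >= 0 and v + w <= 1), normal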

D. The SRPS Language

The SRPS language is a programming language that can describe complex motions effectively. It consists of SRPS commands that let the user predefine a task; examples include LOAD, GO, GOIN, GRASP, PINCH, SNAP, GRIP, and OPEN.
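
To illustrate how such commands might be interpreted, here is a toy dispatcher. The command names come from the list above, but the handlers, their arguments and the printed output are invented for illustration; this is not the actual SRPS implementation.

    def go(args): print("moving along path points", args)
    def grasp(args): print("closing the hand on", args)
    def open_hand(args): print("opening the hand")

    SRPS_COMMANDS = {"GO": go, "GRASP": grasp, "OPEN": open_hand}

    def run_program(lines):
        """Execute a list of SRPS-style command lines such as 'GO P1 P2'."""
        for line in lines:
            name, *args = line.split()
            SRPS_COMMANDS[name](args)

    run_program(["GO P1 P2", "GRASP CUBE", "OPEN"])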

Projects Related to it

Applications in the manufacturing industry

The manufacturing industry involves a great deal of manual labour, including heavy lifting and long series of repetitive tasks. These characteristics are why manufacturing has one of the highest uptakes of robotics. Adding this hybrid technology to the industry can make such tasks considerably easier. In 2016, an engineering team from Silicon Valley demonstrated a robotic arm controlled by virtual reality: the user controlled the arm through a VR headset that acted as a "virtual handle" linking the virtual and real worlds. This can be used to lift heavy objects with a manually controlled robot, or to efficiently train a robot to do repetitive tasks. If swarm robotics is added to the current combination of virtual reality and robotics, an operator could select individual swarm robots with a VR headset and command them through the same interface, which would help speed up the manufacturing process.

Applications in the healthcare industry

The healthcare industry involves a great deal of human-robot interaction, so introducing a robot here requires high accuracy and a user-friendly design. A charity group in the U.K. created a humanoid robot that provides telepresence for children at Great Ormond Street Hospital in London, letting them virtually explore the zoo. The children use a VR headset to control and move the robot in real time, which gives them a first-person perspective as they explore the park. The same technology can also help people with disabilities carry out routine tasks more easily through a VR-linked robot. The approach suits the healthcare industry because the proposed technology is user-friendly and approachable.

Applications in military robotics

Military robotics has to deal with risky environments and requires precision in decision-making tasks such as firing a weapon or locating bases and people in need using drones. The Kronstadt Group from Russia proposed a "virtual battlefield" for their army, intended not only to let soldiers train on their own but also to test robots and drones in virtual reality. Furthermore, the Russian army has started testing a VR headset that allows soldiers to control robots and drones, so robots can be sent to the battlefield while being controlled virtually from a base. Head movements are used to navigate a drone through the headset: for example, turning the head turns the drone, tilting the head up makes the drone climb, and looking down makes it descend to a lower altitude.
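
As a rough illustration of such a head-movement mapping, the sketch below turns a yaw/pitch reading from a headset into simple drone commands; the dead-zone threshold and the command names are assumptions, not the actual system described.

    def head_pose_to_drone_command(yaw_deg, pitch_deg, dead_zone=10.0):
        """Map headset yaw (turn left/right) and pitch (look up/down) to commands."""
        commands = []
        if yaw_deg > dead_zone:
            commands.append("TURN_RIGHT")
        elif yaw_deg < -dead_zone:
            commands.append("TURN_LEFT")
        if pitch_deg > dead_zone:
            commands.append("CLIMB")        # looking up makes the drone go higher
        elif pitch_deg < -dead_zone:
            commands.append("DESCEND")      # looking down lowers the altitude
        return commands or ["HOLD"]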

Social, legal and ethical impact

Security and Privacy

Since this is a connected system, one of the major concerns is data security. To understand the problem or situation they are given and act on it, assistive or military robots have to access previously collected data. Adapting to the environment and responding efficiently also requires drawing on information that may be confidential. Hence, there is a high risk of a leak.

Transparency

Transparency between the user and the robot is required: the user should know why the robot is collecting specific information or conducting specific tests. The system has to provide an easy-to-understand manual accounting for its everyday tasks and the conditions a user can agree or disagree with.

Social attachment

Over time, advances in technology have brought a great deal of comfort, and people have started forming emotional bonds with these technologies. We therefore need to design these systems so that they do not hinder the other relationships in users' lives.

Autonomy of decision making

As the complexity of the system increases, so do the expectations placed on robots and their critical decision-making. Robots will have to be designed to analyze the whole situation in which they are working and choose the best course of action, while also considering the moral aspects of the decisions they make.

Respect, dignity, and right to human care

The human experience of a robot is an important factor to consider, and robots treat humans as recipients rather than as living individuals. For this reason, a direct human presence is needed in the system alongside the VR and robotics mechanisms, at least in domains where they work on humans, the health industry being one example.

Impact on carers

Everyday care activities require a high level of human skill. Robots can only assist a person in carrying out those tasks; performing them fully requires a human touch. Feelings such as empathy and understanding are also uniquely human abilities that AI cannot replicate. So robots will not take over the job of a carer in the near future, but they can help and assist carers, making their tasks easier and more comfortable than they are now.

Economic and industrial impact

Employment

As robotic technology improves, so does the comfort it offers. Because of this, robots are gaining wide acceptance in industry and taking over many jobs that were once reserved for humans. Robots not only work more accurately but also meet deadlines, and companies benefit hugely as production costs fall with the reduced need for manual labour. Companies therefore favour machines over human workers, and the resulting automation is creating significant unemployment in labour-intensive, low-paying jobs. On the brighter side, robots are creating many high-paying jobs for skilled people, such as training and quality-related roles.

Quality of production

Robots have increased industrial productivity by bringing automation to the system. They have also increased the quality of the goods produced, since they are highly accurate and trained for their specific tasks. The human labour required for a task has decreased as well: where it previously took three people to lift an object, one person can now do it by using VR control to direct a real robot that performs the task for them.

Risks

There is always some kind of risk in industry; by using robots, human lives need not be put at risk. For example, instead of sending a spy who risks their own life in the field, the military can send in a VR-linked drone, so the maximum loss is the drone itself while a life is saved. The same goes for manufacturing, where there is no longer a risk of losing human life in industrial hazards such as gas leaks, explosions or machine malfunctions.

Environmental impact

As robots advance, the number of robots used in every industry is rising sharply. With this increase comes a growing pollution cost: jobs previously done by humans are now done by robots, bringing a sharp rise in noise, air and water pollution due to the increased industrial waste they generate.

Robots are made from raw materials such as metals. Most robots use steel, which is extremely resource-intensive to produce. Strong, light metals such as titanium and other exotic forged metals demand even more energy, much of which is generated in polluting ways, thereby increasing the emission of the greenhouse gases responsible for climate change.

Discussion

Challenges, dangers, and opportunities

Few major challenges of developing this system are as follows:

  • Cost: the overall price of such a project is high, as virtual reality gear is expensive to produce and there is little competition among VR manufacturers.
  • The technology is undoubtedly cutting edge, but viable, cost-effective models are scarce in the industry.
  • One of the major uncertainties facing the VR industry is the possible health effects on users.
  • VR still has a limited amount of content, and it needs to become more mobile, as much of its freedom is taken away by dangling cords.

Conclusions

VR systems can be used to generate graphical information for the user. This information helps predict robotic behaviour in achieving the desired trajectory, interacting with the environment and preventing collisions by foreseeing them in advance. The main advantage of this system is that it is easy to understand and extremely effective. It can make jobs easier and, in my opinion, it is also a fun way to finish a dull, boring job.

References

1. E. Toth and F. Tel, "Calibrated virtual reality supported by stereo vision in intelligent robot control system," ISIE '99. Proceedings of the IEEE International Symposium on Industrial Electronics (Cat. No.99TH8465), Bled, Slovenia, 1999, pp. 287-292 vol.1.

2. A. Tikanmäki, T. Bedrník, R. Raveendran and J. Röning, "The remote operation and environment reconstruction of outdoor mobile robots using virtual reality," 2017 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, 2017, pp. 1526-1531.

3. Z. Xuhui, D. Runlin and L. Yongwei, "VR-based remote control system for rescue detection robot in coal mine," 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, 2017, pp. 863-867.

4. B. Moon et al., "Connecting motion control mobile robot and VR content," 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, 2017, pp. 355-359.

5. A. Cardoso et al., "VRCEMIG: A virtual reality system for real time control of electric substations," 2013 IEEE Virtual Reality (VR), Lake Buena Vista, FL, 2013, pp. 165-166.

6. B. Zhang, A. Suzuki and H. Lim, "Development of Sensitive Glove Type Wearable Robot System," 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 1581-1582.

 
