
CN109551489A - Control method and device for a human body auxiliary robot

Apr 02, 2019
Control method and device for a human body auxiliary robot

Technical field

The present invention relates to the field of automation, and in particular to a control method and device for a human body auxiliary robot.

Background art

With advances in automatic control technology, human assistance robots are widely used in many fields, common examples being machine manufacturing and nursing.

In the nursing field, a robot mainly assists nursing staff in completing certain actions, such as moving to a position or grasping an article.

In the related art, when a robot is controlled by nursing staff, control instructions are usually issued through a hand-held remote controller.

Summary of the invention

The purpose of the present invention is to provide a control method for a human body auxiliary robot.

In a first aspect, an embodiment of the invention provides a control method for a human body auxiliary robot, comprising:

obtaining first human eye gaze data of a user and a corresponding ambient image;

selecting an object of interest of the user from the ambient image according to the first human eye gaze data;

generating a robot control instruction according to the position of the object of interest.

With reference to the first aspect, an embodiment of the invention provides a first possible implementation of the first aspect, wherein the method is applied to a human body auxiliary robot, and the human body auxiliary robot includes an arm;

the step of generating a robot control instruction according to the position of the object of interest includes:

generating an arm movement instruction according to the position of the object of interest and the position of the arm of the human body auxiliary robot.

With reference to the first aspect, an embodiment of the invention provides a second possible implementation of the first aspect, wherein the method is applied to a human body auxiliary robot, and the human body auxiliary robot includes an action portion; the step of generating a robot control instruction according to the position of the object of interest includes:

generating an overall movement instruction according to the position of the object of interest and the position of the human body auxiliary robot.

With reference to the first aspect, an embodiment of the invention provides a third possible implementation of the first aspect, wherein the step of selecting the object of interest of the user from the ambient image according to the first human eye gaze data includes:

selecting a region of interest of the user from the ambient image according to the first human eye gaze data;

selecting an object located in the region of interest as the object of interest.

With reference to the first aspect, an embodiment of the invention provides a fourth possible implementation of the first aspect, wherein the step of selecting an object located in the region of interest as the object of interest includes:

if there are multiple candidate objects in the region of interest, outputting the multiple candidate objects present in the region of interest;

selecting a specified candidate object in the region of interest as the object of interest according to a first selection instruction issued by the user with respect to a display screen.

With reference to the first aspect, an embodiment of the invention provides a fifth possible implementation of the first aspect, wherein the step of outputting the multiple candidate objects present in the region of interest includes:

displaying a magnified image of the region of interest on AR glasses;

and the step of selecting a specified candidate object in the region of interest as the object of interest according to the first selection instruction issued by the user with respect to the display screen includes:

obtaining second human eye gaze data generated when the user observes the AR glasses, the first selection instruction being the second human eye gaze data;

selecting the specified object in the region of interest as the object of interest according to the second human eye gaze data.

With reference to the first aspect, an embodiment of the invention provides a sixth possible implementation of the first aspect, wherein the step of selecting an object located in the region of interest as the object of interest includes:

performing foreground extraction on the region of interest to determine foreground objects;

extracting reference images from a target database;

taking, among the foreground objects, an object whose similarity to a reference image meets a preset requirement as the object of interest.

With reference to the first aspect, an embodiment of the invention provides a seventh possible implementation of the first aspect, further comprising:

performing foreground extraction on the region of interest to determine foreground objects;

extracting reference images from a target database;

taking, among the foreground objects, an object whose similarity to a reference image meets a preset requirement as a candidate object.

With reference to the first aspect, an embodiment of the invention provides an eighth possible implementation of the first aspect, further comprising:

selecting the target database from candidate databases according to an acquired second selection instruction; the candidate databases include a home environment database, a medical environment database, and an outdoor environment database.

With reference to the first aspect, an embodiment of the invention provides a ninth possible implementation of the first aspect, further comprising:

obtaining current position information;

searching for location information corresponding to the current position information;

generating the second selection instruction according to the location information.

With reference to the first aspect, an embodiment of the invention provides a tenth possible implementation of the first aspect, wherein

the second selection instruction is a database selection instruction issued by the user.

With reference to the first aspect, an embodiment of the invention provides an eleventh possible implementation of the first aspect, wherein the step of selecting the object of interest of the user from the ambient image according to the first human eye gaze data includes:

selecting first objects to be confirmed from the ambient image according to the first human eye gaze data;

outputting prompt information for each first object to be confirmed respectively;

if a confirmation instruction in response to the prompt information is acquired, taking the first object to be confirmed corresponding to the confirmation instruction as the object of interest.

With reference to the first aspect, an embodiment of the invention provides a twelfth possible implementation of the first aspect, wherein the step of selecting a specified candidate object in the region of interest as the object of interest according to the first selection instruction issued by the user with respect to the display screen includes:

selecting an object corresponding to the first selection instruction as a second object to be confirmed;

outputting prompt information corresponding to each second object to be confirmed respectively;

if a confirmation instruction corresponding to the prompt information is acquired, taking the corresponding second object to be confirmed as the object of interest.

With reference to the first aspect, an embodiment of the invention provides a thirteenth possible implementation of the first aspect, wherein

the step of outputting the prompt information corresponding to each first object to be confirmed respectively includes:

displaying image information corresponding to the first object to be confirmed on a display screen;

and/or playing voice information of the name of the first object to be confirmed;

and the step of outputting the prompt information corresponding to each second object to be confirmed respectively includes:

displaying image information corresponding to the second object to be confirmed on a display screen;

and/or playing voice information of the name of the second object to be confirmed.

With reference to the first aspect, an embodiment of the invention provides a fourteenth possible implementation of the first aspect, wherein, after the step of outputting the prompt information corresponding to the object to be confirmed, the method further includes:

acquiring a user behavior;

if the user behavior meets a preset criterion behavior requirement, determining that the confirmation instruction corresponding to the prompt information has been acquired.

With reference to the first aspect, an embodiment of the invention provides a fifteenth possible implementation of the first aspect, wherein the criterion behavior requirement includes the user completing one or more of the following behaviors:

blinking, opening the mouth, sticking out the tongue, blowing, head movement, speech, eye movement.

In a second aspect, an embodiment of the invention also provides a control device for a human body auxiliary robot, comprising:

a first acquisition module, configured to obtain first human eye gaze data of a user and a corresponding ambient image;

a first selection module, configured to select an object of interest of the user from the ambient image according to the first human eye gaze data;

a first generation module, configured to generate a robot control instruction according to the position of the object of interest.

In conjunction with the second aspect, an embodiment of the invention provides a first possible implementation of the second aspect, wherein the device is applied to a human body auxiliary robot, and the human body auxiliary robot includes an arm;

the first generation module includes:

a first generation unit, configured to generate an arm movement instruction according to the position of the object of interest and the position of the arm of the human body auxiliary robot.

In conjunction with the second aspect, an embodiment of the invention provides a second possible implementation of the second aspect, wherein the device is applied to a human body auxiliary robot, and the human body auxiliary robot includes an action portion; the first generation module includes:

a second generation unit, configured to generate an overall movement instruction according to the position of the object of interest and the position of the human body auxiliary robot.

In conjunction with the second aspect, an embodiment of the invention provides a third possible implementation of the second aspect, wherein the first selection module includes:

a first selection unit, configured to select a region of interest of the user from the ambient image according to the first human eye gaze data;

a second selection unit, configured to select an object located in the region of interest as the object of interest.

In conjunction with the second aspect, an embodiment of the invention provides a fourth possible implementation of the second aspect, wherein the second selection unit includes:

a first output subunit, configured to, if there are multiple candidate objects in the region of interest, output the multiple candidate objects present in the region of interest;

a first selection subunit, configured to select a specified candidate object in the region of interest as the object of interest according to a first selection instruction issued by the user with respect to a display screen.

In conjunction with the second aspect, an embodiment of the invention provides a fifth possible implementation of the second aspect, wherein the first output subunit is further configured to display a magnified image of the region of interest on AR glasses;

and the first selection subunit is further configured to obtain second human eye gaze data generated when the user observes the AR glasses, the first selection instruction being the second human eye gaze data, and to select the specified object in the region of interest as the object of interest according to the second human eye gaze data.

In conjunction with the second aspect, an embodiment of the invention provides a sixth possible implementation of the second aspect, wherein the second selection unit includes:

a first extraction subunit, configured to perform foreground extraction on the region of interest to determine foreground objects;

a second extraction subunit, configured to extract reference images from a target database;

a first operation subunit, configured to take, among the foreground objects, an object whose similarity to a reference image meets a preset requirement as the object of interest.

In conjunction with the second aspect, an embodiment of the invention provides a seventh possible implementation of the second aspect, further comprising:

a third extraction subunit, configured to perform foreground extraction on the region of interest to determine foreground objects;

a fourth extraction subunit, configured to extract reference images from a target database;

a second operation subunit, configured to take, among the foreground objects, an object whose similarity to a reference image meets a preset requirement as a candidate object.

In conjunction with the second aspect, an embodiment of the invention provides an eighth possible implementation of the second aspect, further comprising:

a second selection module, configured to select the target database from candidate databases according to an acquired second selection instruction; the candidate databases include a home environment database, a medical environment database, and an outdoor environment database.

In conjunction with the second aspect, an embodiment of the invention provides a ninth possible implementation of the second aspect, further comprising:

a second acquisition module, configured to obtain current position information;

a first search module, configured to search for location information corresponding to the current position information;

a second generation module, configured to generate the second selection instruction according to the location information.

In conjunction with the second aspect, an embodiment of the invention provides a tenth possible implementation of the second aspect, wherein

the second selection instruction is a database selection instruction issued by the user.

In conjunction with the second aspect, an embodiment of the invention provides an eleventh possible implementation of the second aspect, wherein the first selection module includes:

a third selection unit, configured to select first objects to be confirmed from the ambient image according to the first human eye gaze data;

a first output unit, configured to output prompt information for each first object to be confirmed respectively;

an operation unit, configured to, if a confirmation instruction in response to the prompt information is acquired, take the first object to be confirmed corresponding to the confirmation instruction as the object of interest.

In conjunction with the second aspect, an embodiment of the invention provides a twelfth possible implementation of the second aspect, wherein the first selection subunit includes:

a second selection subunit, configured to select an object corresponding to the first selection instruction as a second object to be confirmed;

a second output subunit, configured to output prompt information for each second object to be confirmed respectively;

a third operation subunit, configured to, if a confirmation instruction in response to the prompt information is acquired, take the corresponding second object to be confirmed as the object of interest.

In conjunction with the second aspect, an embodiment of the invention provides a thirteenth possible implementation of the second aspect, wherein the first output unit is further configured to display image information corresponding to the first object to be confirmed on a display screen;

and/or play voice information of the name of the first object to be confirmed;

and the second output subunit is further configured to display image information corresponding to the second object to be confirmed on a display screen;

and/or play voice information of the name of the second object to be confirmed.

In conjunction with the second aspect, an embodiment of the invention provides a fourteenth possible implementation of the second aspect, further comprising:

a third acquisition module, configured to acquire a user behavior;

a first determination module, configured to determine, if the user behavior meets a preset criterion behavior requirement, that the confirmation instruction corresponding to the prompt information has been acquired.

In conjunction with the second aspect, an embodiment of the invention provides a fifteenth possible implementation of the second aspect, wherein the criterion behavior requirement is that the user completes one or more of the following behaviors:

blinking, opening the mouth, sticking out the tongue, blowing, head movement, speech, eye movement.

In a third aspect, an embodiment of the invention also provides a computer-readable medium bearing non-volatile program code executable by a processor, the program code causing the processor to execute the method of any one of the first aspect.

In a fourth aspect, an embodiment of the invention also provides a computing device, comprising a processor, a memory, and a bus. The memory stores execution instructions; when the computing device runs, the processor and the memory communicate through the bus, and the processor executes the instructions stored in the memory so as to perform the method of any one of the first aspect.

The control method for a human body auxiliary robot provided by the embodiments of the present invention adopts eye movement control: it first obtains the first human eye gaze data of a user and a corresponding ambient image; then selects the object of interest of the user from the ambient image according to the first human eye gaze data; and finally generates a robot control instruction according to the position of the object of interest. The user can thus issue a corresponding control instruction simply by moving his or her eyes, which improves convenience of use.

To make the above objects, features, and advantages of the present invention clearer and easier to understand, preferred embodiments are set forth below and described in detail with reference to the accompanying drawings.

Brief description of the drawings

In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be regarded as limiting its scope. For those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.

Fig. 1 shows the basic flowchart of the control method for a human body auxiliary robot provided by an embodiment of the present invention;

Fig. 2 shows a schematic diagram of observing a real ambient image through AR glasses provided by an embodiment of the present invention;

Fig. 3 shows a schematic diagram of a first case of displaying an ambient image on a display provided by an embodiment of the present invention;

Fig. 4 shows a schematic diagram of a second case of displaying an ambient image on a display provided by an embodiment of the present invention;

Fig. 5 shows a schematic diagram of observing a real ambient image through AR glasses provided by an embodiment of the present invention;

Fig. 6 shows a schematic diagram of a first computing device provided by an embodiment of the present application.

Detailed description of the embodiments

The technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.

In the related art, the beneficiaries of human body auxiliary robots are mainly sick and injured patients and users who need assisted operation (accordingly, the technical field of the method provided herein can also be understood as robots that assist the user in operation). Sick and injured patients are constrained by their own condition (for example, the bodies of hemiplegic or paralyzed patients cannot move) and cannot conveniently perform certain actions (such as moving and grasping); therefore, a traditional human body auxiliary robot should generally be able to complete at least moving and grasping tasks.

The precondition for a human body auxiliary robot to complete grasping and moving actions is that it receives the operation instruction issued by the user; in general, the user issues operation instructions through a handle controller. However, for certain sick and injured patients, a handle controller is inconvenient to use, and misoperation may even occur when using one, which in turn causes danger.

In view of this, the present application provides a control method for a human body auxiliary robot, as shown in Fig. 1, comprising:

S101, obtaining first human eye gaze data of a user and a corresponding ambient image;

S102, selecting an object of interest of the user from the ambient image according to the first human eye gaze data;

S103, generating a robot control instruction according to the position of the object of interest.

The first human eye gaze data can be obtained by detecting the user's eyeballs with eyeball tracking technology, and should at least reflect the angle at which the user's eyeballs are observing. The device that detects the user's eyeballs can be an infrared device or a general image capture device; the image capture device here can be an ordinary computer camera, or a camera on a mobile phone or another terminal. That is, in specific implementation, step S101 can capture an image of the user's eyeballs through an image capture device to obtain an eye movement image, and then analyze the eye movement image to determine the first human eye gaze data of the user. The image capture device can be any one of the following: an eye movement sensor, a miniature camera, a computer camera, or a camera on an intelligent terminal (mobile phone, tablet computer). The step of analyzing the eye movement image to determine the first human eye gaze data of the user can be executed by a processor on the human body auxiliary robot, or by a processor independent of the human body auxiliary robot.
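As a concrete illustration of this step, the following is a minimal sketch in Python of analyzing a cropped eye movement image by pupil thresholding, assuming OpenCV is available. The patent does not prescribe any particular gaze estimation algorithm; the threshold value and the assumed field of view are purely illustrative constants.

```python
import cv2
import numpy as np

def estimate_gaze_angle(eye_image: np.ndarray) -> tuple:
    """Estimate (yaw, pitch) gaze angles in radians from a cropped
    grayscale eye image by locating the pupil center. A rough sketch:
    real eye trackers calibrate per user and model the cornea."""
    # The pupil is the darkest region; threshold and take the largest blob.
    _, mask = cv2.threshold(eye_image, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("pupil not found")
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = eye_image.shape
    # Offset of the pupil from the image center, scaled to an assumed
    # +/-30 degree field of view per axis (an illustrative constant).
    yaw = (cx / w - 0.5) * np.radians(60)
    pitch = (cy / h - 0.5) * np.radians(60)
    return yaw, pitch
```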

The ambient image is associated with the human eye gaze data. In general, the ambient image is preferably acquired from the user's view angle (the shooting angle of the ambient image and the observation angle of the user's eyes should be essentially the same). The ambient image can be a real environment image acquired by a camera arranged on the user's head (more specifically, arranged near the user's eyes), or a virtual/semi-virtual (e.g. AR) image displayed on a display screen. That is, the ambient image can be acquired by a camera arranged on the user's head, or it can be an image displayed on a display screen (the image displayed on the display screen can be a real ambient image or a virtual ambient image). When the ambient image is an image displayed on a display screen, it is also possible not to acquire the ambient image through a camera, but to obtain it directly from the signal source of the display screen or from the processor that transmits to the display screen. No matter in which way the ambient image is obtained, it should be an image that the user can view directly with the naked eye.

When the ambient image is an image displayed on a display screen, the method provided herein preferably displays the ambient image through the lenses of glasses worn on the user's head (AR glasses; that is, the display screen is the lenses of the AR glasses). Of course, the ambient image can also be displayed through other displays, such as a display arranged on the human body auxiliary robot, or the display of an intelligent terminal such as a mobile phone or tablet computer.

Fig. 2 shows a schematic diagram of observing a real ambient image through AR glasses (which can also be described as directly observing the image of the real environment with the naked eye). As the figure shows, the objects appearing in the user's sight are object A, object B, and object C. Fig. 5 shows the result of observing the ambient image through the AR glasses: in Fig. 5, object A is a chair, object B is a desk, and object C is a computer.

Besides perceiving objects in the real environment through AR glasses or directly by sight, the system can also project directly onto the lenses of the AR glasses, so as to display an image of the real environment on the AR glasses (in the image of the real environment, the position and size of each item of scenery are identical to its position and size in the real environment), or to display a virtual image simulating the real environment (in the virtual image simulating the real environment, the position and size of each item of scenery are configurable and are not required to be identical to the position and size of each item of scenery in the real environment). When displaying a virtual image simulating the real environment, after the image of the real environment is acquired by recording, foreground recognition is needed to identify foreground objects, such as desks and chairs, from the image of the real environment; the icons corresponding to the identified foreground objects are then used to compose the virtual image simulating the real environment.

Fig. 3 shows a first case of displaying the ambient image on a display (such as the display device of a tablet computer or mobile phone). That is, the ambient image displayed on the display can be a real environment image obtained by video-capturing the real environment (similar to recording a video and then playing it on the display), or a virtual image simulating the real environment. In the virtual image simulating the real environment, the distribution and size of each displayed object are identical to the distribution and size of each object in the real environment.

Fig. 4 shows a second case of displaying the ambient image on a display (such as the display device of a tablet computer or mobile phone). Compared with the first case, in the second case it is evident that the image corresponding to the real environment is no longer displayed; instead, icons of the objects in the real environment are listed (in an array) on the display.

That is, step S101 can be executed as follows:

obtaining the ambient image through a camera arranged on the user's head, where the ambient image is any one of the following images:

an image of the real environment; an image, displayed on a display screen, simulating the real environment (in which the relative sizes and relative positional relationships of the different objects are identical to those of the corresponding objects in the real environment); or an image composed of icons of target objects (the image presented in Fig. 4), where the target objects are derived from objects in the real environment.

Further, when the ambient image is an image simulating the real environment displayed on a display screen, the method provided herein further includes:

displaying the image simulating the real environment on the display screen.

When the ambient image is an image composed of icons of target objects, the method provided herein further includes:

displaying the image composed of the icons of the target objects on the display screen, the target objects being derived from objects in the real environment. In this case, the ambient image in step S101 can be an image obtained by photographing the display screen, or it can be obtained not by photographing but directly from the data source, as the image code corresponding to the icon-composed image displayed on the display screen; such a system can likewise know clearly what the specific picture content is.

More preferably, the icons of the target objects displayed on the display screen are arranged in an array. The array here can be a square array, a circular array, or an array of another shape.

After the first human eye gaze data and the ambient image are obtained, it can be known from the first human eye gaze data which object in the ambient image (such as a desk or a chair) the user is staring at; that is, the object that the user is determined to be staring at according to the first human eye gaze data is the object of interest of the user.

Finally, a robot control instruction is generated according to the position of the object of interest.

The robot control instruction here can be divided into two kinds: an overall movement instruction of the human body auxiliary robot (an instruction driving the human body auxiliary robot to move toward the object of interest) and an instruction to grasp the object of interest (an instruction driving the human body auxiliary robot to grasp the object of interest). Of course, the robot control instruction here can also mean moving to a specified position and then grasping.

When the robot control instruction is an overall movement instruction, the method provided herein is applied to a human body auxiliary robot, and the human body auxiliary robot includes an action portion; the step of generating a robot control instruction according to the position of the object of interest includes:

generating an overall movement instruction according to the position of the object of interest and the position of the human body auxiliary robot.

Further, after receiving the overall movement instruction, the human body auxiliary robot can move toward the object of interest by driving the action portion. The position of the object of interest and the position of the human body auxiliary robot can be understood as coordinate values in space. There are many ways to acquire the coordinates (position) of the object of interest; several of them are listed below:

First, through a locator and a radio signal transmitter arranged on the object of interest. In specific implementation, the locator on the object of interest can first be driven to acquire a position signal, and the position signal is sent through the radio signal transmitter to the system (the execution subject of the method provided herein). That is, in the method provided herein, the position of the object of interest can be acquired through a locator arranged on the object of interest.

Second, the position of the object of interest is acquired by arranging an external positioning device: for example, positioning can be performed through devices such as an ultrasonic locator or a WiFi locator, or a photo of the actual environment can further be used for auxiliary positioning to determine the position of the object of interest.

Third, the positions of certain objects of interest are fixed. For example, the user may wish to go to the lavatory or to move to the bedside; the positions of the lavatory and the bed are relatively fixed within a room. In this case, the location information of these objects with relatively fixed positions can be pre-stored in the system, and these positions are retrieved directly when needed. In such cases, in the method provided herein, the position of the object of interest can be determined as follows:

searching for the position of the object of interest in an object position list pre-stored in a location database, the object position list recording the correspondence between specified objects and positions. After the user has determined the object of interest, the position of the object of interest can likewise be found by table lookup.

That is, when the object is one that generally does not move, such as a bed or a lavatory, its position can be determined without temporary positioning; instead, a position pre-stored in the system is used, and the position is determined by lookup when needed.
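A minimal sketch of such a table lookup, assuming the object position list is kept as a simple in-memory mapping; the names and coordinates below are placeholders, not measured values.

```python
# Pre-stored positions (x, y, z in meters, room frame) for objects that
# rarely move; the coordinates are placeholders for illustration only.
OBJECT_POSITIONS = {
    "bed":      (3.2, 1.0, 0.0),
    "lavatory": (5.5, 4.1, 0.0),
    "desk":     (1.8, 2.6, 0.0),
}

def lookup_position(object_name: str):
    """Return the pre-stored position of a fixed object, or None if the
    object is movable and must be located by a locator or ultrasound."""
    return OBJECT_POSITIONS.get(object_name)
```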

After the position of the object of interest has been determined, the human body auxiliary robot can be driven to move toward the object of interest. Of course, when the human body auxiliary robot moves, obstacle avoidance should be considered; for example, an ultrasonic sensor can be used to avoid hitting obstacles. When determining the robot control instruction, if the position of the human body auxiliary robot needs to be determined, it can be determined in the same ways as the position of the object of interest described above.

Specifically, in step S103, in order to improve movement accuracy, the robot control instruction is preferably a navigation route pointing to the object of interest (a route that avoids obstacles and roads inconvenient to pass).

Similarly, when the control instruction is an arm movement instruction (a fetching instruction), the method provided herein is applied to a human body auxiliary robot, and the human body auxiliary robot includes an arm; the step of generating a robot control instruction according to the position of the object of interest includes:

generating an arm movement instruction according to the position of the object of interest and the position of the arm of the human body auxiliary robot.

The position of the arm of the human body auxiliary robot mainly refers to the position of the structure of the human body auxiliary robot that can perform grasping. When the control instruction is an arm movement instruction, the position of the object of interest can be acquired in the ways described above, which are not repeated here.
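The following sketch shows one way an arm movement instruction could be derived from the two positions: it computes only a displacement toward the target, whereas a real controller would also run inverse kinematics. All names and the instruction format are assumptions for illustration.

```python
import numpy as np

def make_arm_move_instruction(target_pos, gripper_pos):
    """Build a simple arm movement instruction from the position of the
    object of interest and the current position of the grasping structure."""
    target = np.asarray(target_pos, dtype=float)
    gripper = np.asarray(gripper_pos, dtype=float)
    delta = target - gripper                  # displacement still to cover
    distance = float(np.linalg.norm(delta))
    return {
        "type": "arm_move",
        "direction": (delta / distance).tolist(),  # unit vector to target
        "distance_m": distance,
    }
```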

In the method provided herein, one of the main functions of the human body auxiliary robot is to carry the user while moving; that is, when the human body auxiliary robot moves toward the object of interest according to the overall movement instruction, it simultaneously carries the user toward the object of interest. The method provided herein is preferably applied indoors, for example in relatively simple environments such as the user's home or a hospital. Thus, in the method provided herein, the human body auxiliary robot can carry the user while moving according to the robot control instruction, rather than moving separately from the user. The method provided herein can also be understood as a control method for a human body auxiliary robot applied in a relatively closed indoor environment with a relatively fixed layout.

The method provided herein can determine the object of interest of the user according to the first human eye gaze data, so as to issue to the human body auxiliary robot an instruction to move toward the object of interest or to grasp the object of interest, allowing the user to complete the issuing of instructions automatically.

In actual use, objects of interest are not necessarily evenly distributed in the user's sight. It may happen that in some regions there are relatively many candidate objects while in others there are few, or that different objects occlude or overlap one another. In this case, a region can first be determined as the region of interest according to the user's sight, and the object of interest is then selected from the objects located in that region.

That is, in the method provided herein, step S102, selecting the object of interest of the user from the ambient image according to the first human eye gaze data, includes:

selecting a region of interest of the user from the ambient image according to the first human eye gaze data;

selecting an object located in the region of interest as the object of interest.

The region of interest refers to the area that the user's eyes are staring at. When determining the region of interest, the following can be executed:

determining the gaze point of the user in the ambient image according to the first human eye gaze data;

taking the gaze point as the reference point, selecting the region whose distance from the gaze point is less than a preset threshold as the region of interest.

For example, a circle centered on the gaze point with a radius of 5 centimeters can be drawn in the ambient image, and the region inside the circle is taken as the region of interest. Similarly, a square can be drawn centered on the gaze point, and the region inside the square is taken as the region of interest.
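A sketch of extracting such a circular region of interest from the ambient image, assuming NumPy and pixel coordinates for the gaze point (the 5 cm radius in the text would first be converted to pixels via the display's pixels-per-centimeter):

```python
import numpy as np

def circular_roi(image: np.ndarray, gaze_xy, radius_px: int) -> np.ndarray:
    """Return a copy of the ambient image with everything outside a
    circle around the gaze point zeroed out."""
    h, w = image.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    # Boolean mask: pixels within radius_px of the gaze point.
    mask = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius_px ** 2
    roi = np.zeros_like(image)
    roi[mask] = image[mask]
    return roi
```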

Besides enclosing the region of interest around the gaze point in this way, the ambient image can first be divided into multiple preset regions; whichever region the gaze point then falls in is taken as the region of interest. That is, when determining the region of interest, the following can be executed:

dividing the ambient image into multiple different candidate regions according to the density of candidate objects in the ambient image;

determining the gaze point of the user in the ambient image according to the first human eye gaze data;

taking the candidate region where the gaze point is located as the region of interest.

Dividing the ambient image into multiple candidate regions before determining the gaze point can reduce the influence of the gaze point's position on the division, and divides the ambient image more reasonably.

Under normal circumstances, the denser the candidate objects in a region of the ambient image, the more candidate regions should be divided there, and conversely the fewer; in other words, the number of candidate regions is positively correlated with the number of candidate objects.
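One way to realize this density-dependent division is to cluster the detected object centers so that more objects yield more regions. The sketch below uses k-means (scikit-learn) with an assumed objects-per-region constant; this is only one possible reading of the text, not the patent's prescribed method.

```python
import numpy as np
from sklearn.cluster import KMeans

def divide_into_candidate_regions(object_centers: np.ndarray,
                                  objects_per_region: int = 3) -> np.ndarray:
    """Group detected object centers (an N x 2 array of pixel coordinates)
    into candidate regions. Denser images yield more regions, matching the
    stated positive correlation between object count and region count;
    objects_per_region is an illustrative tuning constant."""
    if len(object_centers) == 0:
        return np.array([], dtype=int)
    n_regions = max(1, len(object_centers) // objects_per_region)
    km = KMeans(n_clusters=n_regions, n_init=10, random_state=0)
    return km.fit_predict(object_centers)  # region index per object
```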

After the region of interest has been determined, the specified candidate object can be found in the region of interest as the object of interest.

The step of selecting an object located in the region of interest as the object of interest has the following four implementations.

The first way of selecting an object located in the region of interest as the object of interest:

if there is only one candidate object in the region of interest, taking the candidate object present in the region of interest as the object of interest.

This way of execution is relatively simple: since there is only one candidate object in the region of interest, the user can only have selected that candidate object as the object of interest.

The second way of selecting an object located in the region of interest as the object of interest:

if there are multiple candidate objects in the region of interest, outputting the multiple candidate objects present in the region of interest;

selecting a specified candidate object in the region of interest as the object of interest according to a first selection instruction issued by the user with respect to a display screen.

That is, when there are multiple candidate objects in the region of interest, the system cannot directly determine which candidate object is the object of interest; in this case, it can only output the multiple candidate objects present in the region of interest, and then determine which candidate object is the object of interest according to the first selection instruction issued by the user.

Specifically, there are several ways to output the multiple candidate objects present in the region of interest:

magnifying the region of interest and displaying the magnified region of interest on a display screen, where the display screen refers to a display screen on the human body auxiliary robot, a display screen on a mobile terminal (such as a mobile phone or tablet computer), or the display screen of AR glasses;

displaying the icons corresponding to the candidate objects located in the region of interest on a display screen, where the display screen again refers to a display screen on the human body auxiliary robot, a display screen on a mobile terminal (such as a mobile phone or tablet computer), or the display screen of AR glasses;

playing by voice the names corresponding to the candidate objects located in the region of interest (that is, if a candidate object is a bed, the system plays the voice "bed" directly).

Correspondingly, the first selection instruction issued by the user also has several specific forms: it can be a voice instruction; it can be an instruction issued through a remote controller (such as a handle-type remote controller); or it can be an instruction issued through second human eye gaze data (for example, after the magnified region of interest or the icons of the candidate objects are displayed on the display, the system can determine which candidate object is the object of interest from where the user is staring, as determined through the second human eye gaze data; the second human eye gaze data is thus data reflecting that the user is staring at a certain object).

That is, in a preferred embodiment, the step of outputting the multiple candidate objects present in the region of interest includes:

displaying the magnified image of the region of interest on the AR glasses/display screen;

and the step of selecting a specified candidate object in the region of interest as the object of interest according to the first selection instruction issued by the user with respect to the display screen includes:

obtaining second human eye gaze data generated when the user observes the AR glasses/display screen, the first selection instruction being the second human eye gaze data;

selecting the specified object in the region of interest as the object of interest according to the second human eye gaze data.

The third way of selecting an object located in the region of interest as the object of interest:

obtaining the historical operation habits of the user;

determining, according to the historical operation habits, the specified candidate object located in the region of interest as the object of interest.

Historical operation habits generally include two kinds of behavioral habits: a first behavioral habit, determined from the correspondence between the content of the user's behaviors and their times of occurrence, and a second behavioral habit, counted from the sequence of the user's behaviors.

In general, the first behavioral habit reflects the user's habits as to when behaviors occur, or the behaviors that occur at different times. For example, the user often/always goes to the lavatory around 8 a.m.; the user often/always moves to the bed around 12:30; or the times at which the user moves to the bed are 12:30 and 21:00. All three of these habits can serve as first behavioral habits. The first behavioral habit is usually counted from a large amount of historical behavior, but it can also be entered in advance. After the first behavioral habit has been determined, as soon as the current time is known, it can be known which candidate object the user is more likely to want to go to.

In general, the second behavioral habit reflects the sequence between the user's different behaviors. For example, the user usually moves toward a chair after going to the lavatory; for another example, the user usually moves toward the dining table after going to the kitchen. Since the second behavioral habit reflects the sequence between the user's different behaviors, once the user's previous behavior has been determined, the next object the user wants to go to can be determined more accurately.

In specific execution, the object of interest can be determined using only the first behavioral habit, using only the second behavioral habit, or using both the first and second behavioral habits at the same time.

Specifically, the step of determining, according to the historical operation habits, the specified candidate object located in the region of interest as the object of interest can be executed as follows:

calculating, according to the historical operation habits (the first behavioral habit and/or the second behavioral habit), a reference value for each candidate object located in the region of interest;

selecting the candidate object with the highest reference value as the object of interest, or selecting a candidate object whose reference value exceeds a preset value as the object of interest.
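A sketch of such a reference value calculation, combining both behavioral habits as simple frequency tables; the equal weighting and the threshold are illustrative assumptions, not specified by the patent.

```python
from datetime import datetime

def reference_value(candidate: str, now: datetime, last_action: str,
                    time_habits: dict, sequence_habits: dict) -> float:
    """Score one candidate object using both behavioral habits.
    time_habits:     {(hour, object): frequency}     - first habit
    sequence_habits: {(previous, object): frequency} - second habit
    Both tables would be counted from the user's history."""
    return (time_habits.get((now.hour, candidate), 0.0)
            + sequence_habits.get((last_action, candidate), 0.0))

def pick_object_of_interest(candidates, now, last_action,
                            time_habits, sequence_habits,
                            threshold: float = 0.5):
    """Return the highest-scoring candidate, or None if no candidate
    exceeds the preset value."""
    scored = {c: reference_value(c, now, last_action,
                                 time_habits, sequence_habits)
              for c in candidates}
    best = max(scored, key=scored.get)
    return best if scored[best] >= threshold else None
```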

The fourth way of selecting an object located in the region of interest as the object of interest:

performing foreground extraction on the region of interest to determine foreground objects;

extracting reference images from a target database;

taking, among the foreground objects, an object whose similarity to a reference image meets a preset requirement as the object of interest.

A foreground object is the foreground image obtained by performing foreground extraction on the region of interest, or a part of that foreground image: if the foreground image is divided into multiple (disconnected) pieces, each piece can serve as a foreground object. Of course, the way of determining foreground objects can also be to compare the extracted image with the reference images in the target database, and to determine the foreground objects appearing in the region of interest according to the reference images.

Reference images are pre-stored in the target database (the target database can store desks, chairs, and other items according to the user's choice). In order to guarantee the accuracy of the calculation, the target database may store reference images from multiple different view angles for each candidate object; when calculating, the reference image of each view angle can be compared with the foreground object for similarity, and the maximum of these similarities is taken as the similarity of the foreground object.
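A sketch of this maximum-over-views similarity, using color-histogram correlation as a stand-in for whatever matcher an implementation actually uses (the patent does not name one); OpenCV and BGR color images are assumed.

```python
import cv2
import numpy as np

def object_similarity(foreground: np.ndarray, view_images: list) -> float:
    """Similarity between one foreground object and one database entry
    that stores reference images from several view angles: compare
    against each view and keep the maximum, as described in the text."""
    def hist(img):
        # 8x8x8-bin color histogram, normalized and flattened to 1-D.
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
        return cv2.normalize(h, h).flatten()
    fg = hist(foreground)
    return max(cv2.compareHist(fg, hist(view), cv2.HISTCMP_CORREL)
               for view in view_images)
```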

Compared with the first three ways, the fourth way may not be able to determine a unique object of interest: there may still be multiple objects of interest in the region of interest, and using the fourth way alone cannot distinguish which of them the user is really interested in. However, compared with the first three ways, the fourth way still has its advantage, mainly that the reference can be determined more accurately (comparing against reference images pre-stored in a database improves the accuracy of the determination); the fourth way can therefore be combined with the first three ways. That is, in the method provided herein, the candidate object can be determined as follows:

performing foreground extraction on the region of interest to determine foreground objects;

extracting reference images from a target database;

taking, among the foreground objects, an object whose similarity to a reference image meets a preset requirement as a candidate object.

Here, a foreground object is the foreground image of the region of interest; if the foreground image is divided into multiple (disconnected) pieces, each piece can serve as a foreground object. Of course, the way of determining foreground objects can also be to compare the extracted image with the reference images in the target database, and to determine the foreground objects appearing in the region of interest according to the reference images.

An object whose similarity to a reference image meets the preset requirement refers to the object with the highest similarity to the reference image. In specific execution of the step of taking, among the foreground objects, an object whose similarity to a reference image meets the preset requirement as the candidate object, the similarities of the reference images corresponding to each foreground object can first be calculated (multiple similarities per foreground object), and the maximum similarity is then selected as the similarity of that foreground object; the foreground object with the highest similarity is then taken as the candidate object.

The above schemes use a target database; that is, the target database determines the reference images, and different target databases therefore help to accurately determine different objects. According to the situations to which this scheme is applicable, the inventors consider that the databases can be divided into the following types:

a home environment database, a medical environment database, and an outdoor environment database.

The reference images stored in the home environment database mainly include:

desk images, chair images, lavatory images.

The reference images stored in the medical environment database mainly include:

images of the various departments, hospital bed images, lavatory images.

The reference images stored in the outdoor environment database mainly include:

images of nearby buildings, images of major businesses.

Correspondingly, the method provided herein further includes the following step:

selecting the target database from candidate databases according to an acquired second selection instruction; the candidate databases include a home environment database, a medical environment database, and an outdoor environment database.

Then, once the target database has been determined and reference images are extracted from it, the corresponding reference images can be extracted, and recognition can in turn be completed more accurately using the corresponding reference images.

The second selection instruction can be issued by the user (the operator of the human body auxiliary robot), issued by a third-party user, or generated by the system in response to the external environment.

That is, when the second selection instruction is generated by the system in response to the external environment, the method provided herein further includes:

obtaining current position information;

searching for location information corresponding to the current position information;

generating the second selection instruction according to the location information.

The current position information reflects the position where the human body auxiliary robot is currently located. Electronic map technology can then be used to search for the location information corresponding to the current position information; the location information can be a place such as a hospital, a home, or a park. The second selection instruction is then generated according to the location information: if the location information is a hospital, the generated second selection instruction can be one for selecting the medical environment database; if the location information is a home, the generated second selection instruction can be one for selecting the home database.
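A minimal sketch of generating the second selection instruction from the looked-up location type; the mapping keys, the database names, and the fallback choice are assumptions for illustration.

```python
# Hypothetical mapping from the looked-up location type to the database
# that the second selection instruction should pick.
LOCATION_TO_DATABASE = {
    "hospital": "medical_environment_db",
    "home":     "home_environment_db",
    "park":     "outdoor_environment_db",
}

def make_second_selection_instruction(location_type: str) -> str:
    """Generate the second selection instruction from the location info
    obtained via the electronic map; unknown places fall back to the
    outdoor database (an assumption, not stated in the text)."""
    return LOCATION_TO_DATABASE.get(location_type, "outdoor_environment_db")
```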

Correspondingly, when the second selection instruction is issued by the user, the method provided herein further includes:

receiving the database selection instruction issued by the user.

The database selection instruction issued by the user can be issued by operating a key on a hand-held remote controller, or issued through a voice command.

In the scheme provided herein, when the user selects the object of interest, the system can automatically help the user make the selection. As in the situation shown above, when there are too many objects in a certain region of the ambient image, the system may not be able to accurately determine which object the user is looking at; in this case, the system can use partial magnification to help the user confirm. Besides this way, the user can also be helped by confirming with the user directly. For example, the system determines from the first human eye gaze data that the user is observing region A, and there are currently three objects in region A: a desk, a vase, and a tablecloth. At this point, the system can confirm with the user to determine which object the user is actually looking at.

That is, in the method provided herein, step S102 can be executed as follows:

step 1021, selecting first objects to be confirmed from the ambient image according to the first human eye gaze data;

step 1022, outputting the prompt information of each first object to be confirmed respectively;

step 1023, if a confirmation instruction in response to the prompt information is acquired, taking the first object to be confirmed corresponding to the confirmation instruction as the object of interest.

An object to be confirmed can be understood as an object that may be seen at the view angle corresponding to the first human eye gaze data; for example, at a certain view angle the user may be able to see a desk and a bed. In step 1022, the prompt information output is the information corresponding to the desk and the bed. It should be noted that in step 1022 the form of output can be diverse: for example, output as image information, or output as voice. Output as image information can specifically be displaying the text or figure of the desk and the bed on the display screen, so that the user can issue a confirmation instruction. Output as voice information can be the system automatically playing the names of the bed and the desk, so that the user can issue a confirmation instruction. That is, the step of outputting the prompt information corresponding to each first object to be confirmed respectively includes:

displaying image information corresponding to the first object to be confirmed on a display screen; and/or playing voice information of the name of the first object to be confirmed.

More specifically, when outputting the prompt information, the system can display multiple objects on the display screen at the same time, for example displaying the text of the bed and the desk simultaneously; or the system can cyclically display different objects on the display screen, for example displaying the desk during seconds 1-5 and 11-15 and the bed during seconds 6-10 and 16-20. The user then only needs to press a confirmation button, and the system can determine which object the user desires to select from the object being displayed when the confirmation button is pressed. For example, if the user presses the confirmation button at the 8th second (during the period when the bed is displayed), the system should determine that the user desires to select the bed as the object of interest.
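A sketch of resolving the pressed confirmation button against such a cyclic display schedule; the schedule format and the cycle length are illustrative assumptions.

```python
def object_at_press_time(t_press: float, schedule, cycle_s: float = 10.0):
    """Resolve which object was on screen when the confirmation button
    was pressed. 'schedule' lists (start_s, end_s, name) within one
    display cycle, e.g. [(0, 5, "desk"), (5, 10, "bed")] for the desk/bed
    example in the text (intervals are half-open here)."""
    t = t_press % cycle_s            # the schedule repeats every cycle
    for start, end, name in schedule:
        if start <= t < end:
            return name
    return None

# A press at the 8th second falls in the bed slot:
# object_at_press_time(8.0, [(0, 5, "desk"), (5, 10, "bed")])  -> "bed"
```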

Correspondingly, the step of selecting the specified object in the region of interest as the object of interest according to the first selection instruction issued by the user with respect to the display screen can be realized as follows:

selecting an object corresponding to the first selection instruction as a second object to be confirmed;

outputting the prompt information corresponding to each second object to be confirmed respectively;

if a confirmation instruction corresponding to the prompt information is acquired, taking the corresponding second object to be confirmed as the object of interest.

The first selection instruction is issued by the user with respect to the display screen; after the user has issued the first selection instruction, the system can determine the corresponding second object to be confirmed. The step of outputting the prompt information corresponding to the second object to be confirmed is implemented in the same way as step 1022, and the step of taking the corresponding second object to be confirmed as the object of interest is the same as step 1023; they are not explained again here.

Similarly, the step of outputting the prompt information corresponding to each second object to be confirmed respectively includes:

displaying image information corresponding to the second object to be confirmed on a display screen;

and/or playing voice information of the name of the second object to be confirmed.

After the prompt information for the first object to be confirmed or the second object to be confirmed is output, the user can confirm in a variety of different ways, for example by voice or by eye movement.

Further, in the scheme provided herein, after the step of outputting the prompt information corresponding to the object to be confirmed, the method also includes:

obtaining user behavior;

if the user behavior meets a preset criterion behavior requirement, determining that the confirmation instruction corresponding to the prompt information has been received.

Here, the criterion behavior requirement includes the user completing one or more of the following behaviors:

blinking, opening the mouth, sticking out the tongue, blowing, head movement, speech behavior, eye movement behavior.

In a specific implementation, the criterion behavior requirement means that the user completes any one of the following behaviors:

blinking, opening the mouth, sticking out the tongue, blowing, head movement, speech behavior, eye movement behavior;

or simultaneously completes at least two of the following behaviors:

blinking, opening the mouth, sticking out the tongue, blowing, head movement, speech behavior, eye movement behavior.

It should be noted that requiring the user to simultaneously complete at least two behaviors usually corresponds to the situation in which there are many objects awaiting confirmation; in this case a certain prompt can be given to the user. Specifically, the following step may also be executed while step 1022 is executed:

outputting the criterion behavior requirement corresponding to each first object to be confirmed respectively.

Here, the criterion behavior requirement that is output is exactly the action the user needs to make. For example, while "bed" (one kind of prompt information) is shown on the display screen, "blink and stick out the tongue" is also shown; this means that the user must blink and stick out the tongue at the same time before the system will consider that the user wishes to select the bed as the object of interest.
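A minimal sketch of this criterion behavior check, assuming a per-object table of required behaviors and a behavior detector that runs elsewhere; all names and behavior sets are illustrative:

```python
# Checks detected user behaviors against per-object criterion behavior
# requirements; behavior detection itself (camera, microphone, eye tracker)
# is assumed to happen elsewhere.

CRITERIA = {
    "desk": {"blink"},                      # a single behavior suffices
    "bed":  {"blink", "stick_out_tongue"},  # two simultaneous behaviors
}

def confirmed_object(detected):
    """Return the object whose criterion behavior set exactly matches the
    behaviors detected at the same time, or None if nothing matches."""
    for obj, required in CRITERIA.items():
        if detected == required:
            return obj
    return None

print(confirmed_object({"blink", "stick_out_tongue"}))  # -> "bed"
print(confirmed_object({"blink"}))                      # -> "desk"
```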

Corresponding to the above method, the present invention also provides a control device of a human body auxiliary robot, comprising:

a first obtaining module, configured to obtain the first human eye perspective data of the user and the corresponding ambient image;

a first choice module, configured to select the object of interest of the user from the ambient image according to the first human eye perspective data;

a first generation module, configured to generate a robot control instruction according to the position of the object of interest.

Preferably, the device acts on a human body auxiliary robot, and the human body auxiliary robot includes an arm;

the first generation module includes:

a first generation unit, configured to generate an arm move instruction according to the position of the object of interest and the position of the arm of the human body auxiliary robot.

Preferably, the device acts on a human body auxiliary robot, and the human body auxiliary robot includes an action portion; the first generation module includes:

a second generation unit, configured to generate an overall movement instruction according to the position of the object of interest and the position of the human body auxiliary robot.
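A minimal sketch of what the two generation units might compute, assuming the positions are 3-D coordinates in one shared frame; the displacement arithmetic and the instruction format are illustrative assumptions, not the patent's:

```python
# Turns the position of the object of interest and the position of the arm
# (or of the whole robot) into a movement instruction. The coordinate frame
# and the instruction format are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class MoveInstruction:
    dx: float
    dy: float
    dz: float
    target: str  # "arm" for the first generation unit, "base" for the second

def make_instruction(object_pos, actor_pos, target):
    """Displacement that would bring the actor to the object's position."""
    dx, dy, dz = (o - a for o, a in zip(object_pos, actor_pos))
    return MoveInstruction(dx, dy, dz, target)

# Arm move instruction (first generation unit):
print(make_instruction((1.2, 0.4, 0.9), (0.8, 0.1, 0.7), "arm"))
# Overall movement instruction (second generation unit):
print(make_instruction((1.2, 0.4, 0.9), (0.0, 0.0, 0.0), "base"))
```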

Preferably, the first choice module includes:

a first selecting unit, configured to select the area-of-interest of the user from the ambient image according to the first human eye perspective data;

a second selecting unit, configured to select the object located in the area-of-interest as the object of interest.

Preferably, the second selecting unit includes:

a first output subelement, configured to, if there are multiple candidate targets in the area-of-interest, output the multiple candidate targets present in the area-of-interest;

a first choice subelement, configured to select the specified candidate target in the area-of-interest as the object of interest according to the first choice instruction issued by the user with respect to the display screen.

Preferably, the first output subelement is further configured to: display the enlarged image of the area-of-interest on AR glasses;

the first choice subelement is further configured to: obtain second human eye perspective data generated when the user observes the AR glasses, the first choice instruction being the second human eye perspective data; and select the specified object in the area-of-interest as the object of interest according to the second human eye perspective data.
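A minimal sketch of this AR-glasses refinement, assuming the enlarged view is a scaled crop of the ambient image so the second gaze point can be mapped back through the crop origin and scale; the bounding boxes and names are illustrative:

```python
# Maps a gaze point on the enlarged area-of-interest shown on the AR glasses
# back to coordinates in the original ambient image, then picks the object
# whose bounding box contains the mapped point.

def to_original(gaze_xy, crop_origin, scale):
    """gaze_xy: gaze point on the enlarged view; crop_origin: top-left corner
    of the area-of-interest in the ambient image; scale: enlargement factor."""
    gx, gy = gaze_xy
    ox, oy = crop_origin
    return (ox + gx / scale, oy + gy / scale)

def pick_object(point, objects):
    """Choose the candidate whose bounding box contains the mapped point."""
    px, py = point
    for name, (x0, y0, x1, y1) in objects.items():
        if x0 <= px <= x1 and y0 <= py <= y1:
            return name
    return None

# Illustrative bounding boxes in ambient-image coordinates.
objects = {"vase": (40, 60, 55, 90), "desk": (10, 50, 80, 120)}
mapped = to_original(gaze_xy=(60, 120), crop_origin=(30, 40), scale=4.0)
print(mapped, "->", pick_object(mapped, objects))  # -> (45.0, 70.0) -> vase
```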

Preferably, the second selecting unit includes:

a first extraction subelement, configured to perform foreground extraction on the area-of-interest to determine foreground objects;

a second extraction subelement, configured to extract reference images from a target database;

a first operation subelement, configured to take, among the foreground objects, an object whose similarity to the reference images meets a preset requirement as the object of interest.
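A minimal sketch of this extraction-and-matching path, with cosine similarity of raw intensity patches standing in for whatever similarity measure an implementation would actually use (feature descriptors, embeddings); all names, shapes, and the threshold are illustrative:

```python
# Compares extracted foreground patches with database reference images and
# keeps the patch whose similarity meets the preset requirement.

import numpy as np

def similarity(a, b):
    """Cosine similarity of flattened intensity patches (a toy measure)."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_object(foreground, references, threshold=0.9):
    """Return the first foreground patch whose best database match meets
    the preset similarity requirement."""
    for name, patch in foreground.items():
        best = max(similarity(patch, ref) for ref in references.values())
        if best >= threshold:
            return name
    return None

rng = np.random.default_rng(0)
cup = rng.random((8, 8))
references = {"cup": cup}                             # the target database
foreground = {"object_a": cup + 0.01,                 # near-duplicate patch
              "object_b": rng.random((8, 8))}         # unrelated patch
print(match_object(foreground, references))           # -> "object_a"
```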

Preferably, the device further includes:

a third extraction subelement, configured to perform foreground extraction on the area-of-interest to determine foreground objects;

a fourth extraction subelement, configured to extract reference images from the target database;

a second operation subelement, configured to take, among the foreground objects, an object whose similarity to the reference images meets a preset requirement as a candidate target.

Preferably, the device further includes:

a second selecting module, configured to select the target database from candidate databases according to a second selection instruction that is received; the candidate databases include a home environment database, a medical environment database, and an outdoor environment database.

Preferably, the device further includes:

a second obtaining module, configured to obtain current location information;

a first searching module, configured to look up place information corresponding to the current location information;

a second generation module, configured to generate the second selection instruction according to the place information.
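A minimal sketch of this location-driven selection, where a lookup table from location to place type stands in for whatever positioning service an implementation would use; all table entries and names are illustrative:

```python
# Generates the second selection instruction from current location info
# and uses it to pick the target database from the candidate databases.

PLACE_BY_LOCATION = {          # illustrative lookup, e.g. from a map service
    "ward_3_room_12": "medical",
    "apartment_5b":   "home",
    "city_park":      "outdoor",
}

DATABASES = {                  # the candidate databases named in the text
    "home":    "home environment database",
    "medical": "medical environment database",
    "outdoor": "outdoor environment database",
}

def second_selection_instruction(current_location):
    """Look up the place for the current location and derive the instruction."""
    return PLACE_BY_LOCATION.get(current_location, "home")

def target_database(current_location):
    return DATABASES[second_selection_instruction(current_location)]

print(target_database("ward_3_room_12"))  # -> medical environment database
```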

Preferably, the second selection instruction is a database selection instruction issued by the user.

Preferably, the first choice module includes:

a third selecting unit, configured to select first objects to be confirmed from the ambient image according to the first human eye perspective data;

a first output unit, configured to output the prompt information corresponding to each first object to be confirmed respectively;

an operating unit, configured to, if a confirmation instruction corresponding to the prompt information is received, take the corresponding first object to be confirmed as the object of interest.

Preferably, the first choice subelement includes:

a second selection subelement, configured to select the object corresponding to the first choice instruction as a second object to be confirmed;

a second output subelement, configured to output the prompt information corresponding to each second object to be confirmed respectively;

a third operation subelement, configured to, if a confirmation instruction corresponding to the prompt information is received, take the corresponding second object to be confirmed as the object of interest.

Preferably, the first output unit is further configured to display image information corresponding to the first object to be confirmed on a display screen;

and/or play voice information of the name of the first object to be confirmed;

the second output subelement is further configured to display image information corresponding to the second object to be confirmed on a display screen;

and/or play voice information of the name of the second object to be confirmed.

Preferably, the device further includes:

a third obtaining module, configured to obtain user behavior;

a first determining module, configured to determine, if the user behavior meets the preset criterion behavior requirement, that the confirmation instruction corresponding to the prompt information has been received.

Preferably, the criterion behavior requirement is that the user completes one or more of the following behaviors:

blinking, opening the mouth, sticking out the tongue, blowing, head movement, speech behavior, eye movement behavior.
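To show only how the three top-level modules of the device chain together, here is a minimal composition sketch with every internal step stubbed; the class names mirror the module names above, and all returned values are illustrative:

```python
# Minimal composition of the three top-level modules of the control device;
# every internal step is stubbed, since the patent leaves them open.

class FirstObtainingModule:
    def obtain(self):
        return {"gaze": "first human eye perspective data",
                "image": "ambient image"}

class FirstChoiceModule:
    def select(self, gaze, image):
        # Stand-in for gaze-driven selection and confirmation.
        return {"name": "cup", "position": (1.2, 0.4, 0.9)}

class FirstGenerationModule:
    def generate(self, object_of_interest):
        return ("move_arm_to", object_of_interest["position"])

obtain, choose, generate = (FirstObtainingModule(), FirstChoiceModule(),
                            FirstGenerationModule())
data = obtain.obtain()
obj = choose.select(data["gaze"], data["image"])
print(generate.generate(obj))   # -> ('move_arm_to', (1.2, 0.4, 0.9))
```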

The present application further provides a computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to execute the control method of the human body auxiliary robot described above.

As shown in Fig. 6, which is a schematic diagram of the first computing device provided by an embodiment of the present application, the first computing device 1000 includes a processor 1001, a memory 1002, and a bus 1003. The memory 1002 stores execution instructions; when the first computing device runs, the processor 1001 and the memory 1002 communicate through the bus 1003, and the processor 1001 executes the steps of the above control method stored in the memory 1002.

If the function is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.

The above description is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of changes or replacements within the technical scope disclosed by the present invention, and these should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.