CN103095813A - Voice interaction system, mobile terminal device and voice communication method

Info

Publication number
CN103095813A
Authority
CN
China
Prior art keywords
communication
voice
mobile terminal
voice signal
cloud server
Prior art date
2012-12-31
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210592490.XA
Other languages
Chinese (zh)
Inventor
张国峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Via Technologies Inc
Original Assignee
Via Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2012-12-31
Filing date
2012-12-31
Publication date
2013-05-08
2012-12-31: Application filed by Via Technologies Inc
2012-12-31: Priority to CN201210592490.XA
2013-05-08: Publication of CN103095813A
2013-05-17: Priority to CN201310182848.6A (CN103281466B)
2013-06-19: Priority to TW102121754A (TWI497408B)
2013-06-21: Priority to US13/923,383 (US8934886B2)
Status: Pending


Landscapes

  • Mobile Radio Communication Systems (AREA)
  • Telephone Function (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A voice interaction system, a mobile terminal device and a voice communication method are provided. The mobile terminal device is suitable for communicating with a cloud server and comprises a voice system, a communication module and a processing unit, wherein the processing unit is coupled to the communication module and the voice system, and the voice system receives a first voice signal and a second voice signal. The communication module transmits the first voice signal to the cloud server, and the cloud server parses a communication target and a communication instruction from the first voice signal. The processing unit receives the communication target and searches an address book located in the mobile terminal device according to the communication target to obtain a selection list matching the communication target. When the voice system receives the second voice signal, the communication module transmits the second voice signal and the selection list to the cloud server at the same time to produce a selection target. The processing unit receives and executes the communication instruction and the selection target.

Description

Voice interaction system, mobile terminal device and voice communication method

Technical field

The present invention relates to speech control technology, and more particularly to a voice interaction system, a mobile terminal apparatus and a voice communication method.

Background

With the development of technology, mobile terminal apparatuses equipped with a voice system have become increasingly widespread. Such a voice system uses speech understanding technology to let the user communicate with the mobile terminal apparatus. For instance, the user only needs to speak a request to the mobile terminal apparatus, such as looking up a train number, checking the weather or making a phone call, and the system takes a corresponding action according to the user's voice signal. The action may be answering the user's question by voice, or driving the mobile terminal apparatus to act according to the user's instruction.

However, several problems remain to be solved in the development of voice systems, such as the data security of voice services that rely on a cloud server, and the convenience of activating the voice system.

Regarding the data security of voice services combined with a cloud server: current voice interaction systems adopt cloud technology, handing the complex speech processing that requires powerful computing capability over to the cloud server. This approach can significantly reduce the hardware cost of the mobile terminal apparatus. However, for operations such as making calls or sending short messages through the address book, the address book must be uploaded to the cloud server so that the call or message target can be found, so the confidentiality of the address book becomes an important issue. Although the cloud server can use an encrypted connection and transmit the data instantly without storing it, it is difficult to eliminate the user's concern about this practice.

Regarding the convenience of activating the voice system: at present the voice system is mostly activated by triggering an application shown on the screen of the mobile terminal apparatus, or by a physical button provided on the mobile terminal apparatus. Both designs require operating the mobile terminal apparatus itself, which is quite inconvenient in some situations, for example when the user is on the road with the mobile terminal apparatus in a pocket or handbag, or is cooking in the kitchen and needs to dial the mobile phone left in the living room to ask a friend for recipe details; in such cases the user cannot immediately touch the mobile terminal apparatus to turn on the voice system.

In addition, the sound amplification function of the mobile terminal apparatus has a similar problem. At present the user can start the sound amplification function by operating the mobile phone with a finger, or by holding the mobile phone close to the ear with one hand. However, when the user cannot immediately touch the mobile terminal apparatus but needs the sound amplification function, a design that can only be started from the mobile terminal apparatus itself causes inconvenience to the user.

Therefore, how to improve the above shortcomings has become an issue to be solved.

Summary of the invention

The invention provides a voice interaction system, a mobile terminal apparatus and a voice communication method that can provide voice services more quickly.

The present invention proposes a voice interaction system comprising a mobile terminal apparatus and a cloud server. The mobile terminal apparatus comprises a voice system, a communication module and a processing unit. The voice system receives a first voice signal and a second voice signal, and the communication module transmits the first voice signal and the second voice signal. The processing unit is coupled to the communication module and the voice system. The communication module transmits the first voice signal to the cloud server, and the cloud server parses a communication target and a communication instruction from the first voice signal. The processing unit receives the communication target and searches an address book stored in the mobile terminal apparatus according to the communication target to obtain a selection list that matches the communication target. When the voice system receives the second voice signal, the communication module transmits the second voice signal and the selection list to the cloud server at the same time to produce a selection target. The processing unit then receives and executes the communication instruction and the selection target.

The present invention further proposes a mobile terminal apparatus suitable for communicating with a cloud server. The mobile terminal apparatus comprises a voice system, a communication module and a processing unit, wherein the processing unit is coupled to the communication module and the voice system. The communication module transmits a first voice signal to the cloud server, and the cloud server parses a communication target and a communication instruction from the first voice signal. The processing unit receives the communication target and searches an address book stored in the mobile terminal apparatus according to the communication target to obtain a selection list that matches the communication target. When the voice system receives a second voice signal, the communication module transmits the second voice signal and the selection list to the cloud server at the same time to produce a selection target. The processing unit then receives and executes the communication instruction and the selection target.

The present invention also proposes a voice communication method for a mobile terminal apparatus. The method first receives a first voice signal and transmits the first voice signal to a cloud server. Next, a communication target parsed from the first voice signal is received from the cloud server. Then, an address book in the mobile terminal apparatus is searched according to the communication target to obtain a selection list that matches the communication target. Afterwards, a second voice signal is received, and the second voice signal and the selection list are transmitted to the cloud server at the same time. Finally, a communication instruction and a selection target are received from the cloud server and executed.
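As an illustration only (not part of the patent disclosure), the client-side flow of this method can be sketched in Python as below. All helper names (cloud.parse, cloud.resolve, record_voice, dial) are assumptions introduced for the sketch.

```python
# Illustrative sketch of the claimed voice communication method (mobile side).
# cloud.parse, cloud.resolve, record_voice and dial are assumed helpers.

def voice_communication(cloud, record_voice, address_book, dial):
    first_voice = record_voice()                       # receive the first voice signal
    instruction, target = cloud.parse(first_voice)     # e.g. ("dial", "Wang")

    # Search the local address book; only the matching names (the "selection
    # list") are uploaded, never the complete address book.
    selection_list = [name for name in address_book if target in name]

    second_voice = record_voice()                      # user picks an entry by voice
    # The second voice signal and the selection list go up in one request.
    selected_name = cloud.resolve(second_voice, selection_list)

    if instruction == "dial":                          # execute the communication
        dial(address_book[selected_name])              # instruction on the device
```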

Based on the above, the present invention improves the quality of the voice service by transmitting the selection list and the corresponding selection voice signal to the cloud server at the same time.

In order to make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

Description of drawings

Fig. 1 is a block diagram of a speech control system according to an embodiment of the invention.

Fig. 2 is a block diagram of a speech control system according to another embodiment of the invention.

Fig. 3 is a flowchart of a speech control method according to an embodiment of the invention.

Fig. 4 is a block diagram of a voice interaction system according to an embodiment of the invention.

Fig. 5 is a schematic diagram of a voice communication flow for a voice interaction system according to an embodiment of the invention.

Fig. 6 is a system schematic diagram of a mobile terminal apparatus according to an embodiment of the invention.

Fig. 7 is a flowchart of a method for automatically starting the call sound amplification function of a mobile terminal apparatus according to an embodiment of the invention.

[Description of main element symbols]

100,200: speech control system

110: auxiliary actuating apparatus

112,122: wireless transport module

114: trigger module

116: wireless charging battery

1162: battery unit

1164: wireless charging module

120,220,420: mobile terminal apparatus

121,426: voice system

124, 610: speech sampling module

126: speech synthesis module

127: voice output interface

128, 424: communication module

130, 410: (cloud) server

132: speech understanding module

1322: speech recognition module

1324: speech processing module

400: voice interaction system

412, 422, 660: processing unit

414: communication module

428: memory unit

429: address book

430: display unit

620: input unit

630: dialing unit

640: receiver

650: public address equipment

670: earphone

S302 ~ S312, S501 ~ S519, S710 ~ S770: steps

DRC: call receive data

DTC: call transmit data

SAI: input speech signal

SAO: output audio signal

SIO: input operation signal

Embodiment

Although current mobile terminal apparatuses can provide a voice system so that the user can communicate with the device by voice, the user must still operate the mobile terminal apparatus itself to start the voice system. Therefore, when the user cannot immediately touch the mobile terminal apparatus but wants to turn on the voice system, the demand often cannot be satisfied immediately. For this reason, the present invention proposes a device that assists in activating the voice system, and a corresponding method, which allow the user to turn on the voice system more conveniently. In order to make the content of the present invention clearer, embodiments that can actually be implemented are given below as examples.

Fig. 1 is a block diagram of a speech control system according to an embodiment of the invention. Referring to Fig. 1, the speech control system 100 comprises an auxiliary actuating apparatus 110, a mobile terminal apparatus 120 and a server 130. In the present embodiment, the auxiliary actuating apparatus 110 can start the voice system of the mobile terminal apparatus 120 by a wireless transmission signal, so that the mobile terminal apparatus 120 communicates with the server 130 according to voice signals.

Specifically, the auxiliary actuating apparatus 110 comprises a first wireless transport module 112 and a trigger module 114, wherein the trigger module 114 is coupled to the first wireless transport module 112. The first wireless transport module 112 is, for example, a device supporting a communication protocol such as Wireless Fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, ultra-wideband (UWB) or radio-frequency identification (RFID), and can send a wireless transmission signal so as to establish a wireless link with another matching wireless transport module. The trigger module 114 is, for example, a button or key. In the present embodiment, after the user presses the trigger module 114 and a trigger signal is generated, the first wireless transport module 112 receives the trigger signal and is activated; at this time the first wireless transport module 112 sends a wireless transmission signal to the mobile terminal apparatus 120. In one embodiment, the auxiliary actuating apparatus 110 can be a Bluetooth headset.

It should be noted that although some hands-free headsets/microphones currently also have a design for starting certain functions of the mobile terminal apparatus 120, in another embodiment of the present invention the auxiliary actuating apparatus 110 can be different from such a headset/microphone. Such a headset/microphone connects to the mobile terminal apparatus to replace the earpiece/microphone of the mobile terminal apparatus 120 for listening and talking, and its start-up capability is an additional design. The auxiliary actuating apparatus 110 of the present application, by contrast, is used "only" to turn on the voice system of the mobile terminal apparatus 120 and has no listening/talking function, so its internal circuit design can be simplified and its cost is lower. In other words, the auxiliary actuating apparatus 110 is a device separate from the above hands-free headset/microphone, that is, the user may own both a hands-free headset/microphone and the auxiliary actuating apparatus 110 of the present application at the same time.

In addition, the body of the auxiliary actuating apparatus 110 can be an article that the user can reach conveniently, such as an ornament like a ring, a watch, earrings, a necklace or glasses, various other portable articles, or an installation component such as a driving accessory disposed on a steering wheel, without being limited to the above. That is to say, the auxiliary actuating apparatus 110 is a device integrated into daily life; through the arrangement of the built-in system, the user can easily touch the trigger module 114 to turn on the voice system. For instance, when the body of the auxiliary actuating apparatus 110 is a ring, the user can simply move a finger to press and trigger the trigger module 114 on the ring. On the other hand, when the body of the auxiliary actuating apparatus 110 is a driving accessory, the user can also easily trigger its trigger module 114 while driving. Furthermore, compared with the discomfort of wearing a headset/microphone for listening and talking, the auxiliary actuating apparatus 110 of the present application can turn on the voice system of the mobile terminal apparatus 120 and even the sound amplification function (described in detail later), so that the user can listen and talk directly through the mobile terminal apparatus 120 without wearing a headset/microphone. Moreover, since the auxiliary actuating apparatus 110 is an article that the user already wears or uses, there is no discomfort in use and no adaptation period is needed. For example, when the user is cooking in the kitchen and needs to dial the mobile phone left in the living room to ask a friend for recipe details, and the user is wearing an auxiliary actuating apparatus 110 of the present invention in the form of a ring, a necklace or a watch, the user can simply touch the ring, necklace or watch to turn on the voice system. Although some headsets/microphones with a start-up capability could also achieve this, not every cooking session requires calling a friend, so for the user it would be quite inconvenient to wear a headset/microphone all the time while cooking just to control the mobile terminal apparatus.

In other embodiments, the auxiliary actuating apparatus 110 may also be provided with a wireless charging battery 116 for driving the first wireless transport module 112. Furthermore, the wireless charging battery 116 comprises a battery unit 1162 and a wireless charging module 1164, wherein the wireless charging module 1164 is coupled to the battery unit 1162. The wireless charging module 1164 can receive energy supplied from a wireless power supply (not shown) and convert the energy into electric power to charge the battery unit 1162. Thus, the first wireless transport module 112 of the auxiliary actuating apparatus 110 can conveniently be powered through the wireless charging battery 116.

On the other hand, the mobile terminal apparatus 120 is, for example, a cell phone, a personal digital assistant (PDA) phone, a smart phone, a pocket PC with communication software installed, a tablet PC or a notebook computer. The mobile terminal apparatus 120 can be any portable mobile device with a communication function, and its scope is not limited here. In addition, the mobile terminal apparatus 120 can use an Android operating system, a Microsoft operating system, a Linux operating system and so on, without being limited to the above.

The mobile terminal apparatus 120 comprises a second wireless transport module 122. The second wireless transport module 122 matches the first wireless transport module 112 of the auxiliary actuating apparatus 110 and adopts a corresponding wireless communication protocol (such as Wi-Fi, WiMAX, Bluetooth, ultra-wideband or RFID) so as to establish a wireless link with the first wireless transport module 112. It should be noted that the terms "first" wireless transport module 112 and "second" wireless transport module 122 are used here merely to indicate that the wireless transport modules are disposed in different devices, and are not intended to limit the present invention.

In other embodiments, the mobile terminal apparatus 120 further comprises a voice system 121 coupled to the second wireless transport module 122, so after the user triggers the trigger module 114 of the auxiliary actuating apparatus 110, the voice system 121 can be started wirelessly through the first wireless transport module 112 and the second wireless transport module 122. In one embodiment, the voice system 121 can comprise a speech sampling module 124, a speech synthesis module 126 and a voice output interface 127. The speech sampling module 124 receives voice signals from the user and is, for example, a sound-receiving device such as a microphone. The speech synthesis module 126 can query a speech synthesis database, which for example records text and its corresponding speech, so that the speech synthesis module 126 can find the speech corresponding to a specific text message and synthesize speech for that message. Afterwards, the speech synthesis module 126 outputs the synthesized speech through the voice output interface 127 to play it to the user. The voice output interface 127 is, for example, a loudspeaker or an earphone.

In addition, the mobile terminal apparatus 120 may also be provided with a communication module 128. The communication module 128 is, for example, an element that can transmit and receive wireless signals, such as a radio-frequency transceiver. Furthermore, the communication module 128 allows the user to answer or make calls, or to use other services provided by the telecommunication operator, through the mobile terminal apparatus 120. In the present embodiment, the communication module 128 can receive response information from the server 130 through the Internet, and establish a call connection between the mobile terminal apparatus 120 and at least one electronic device, such as another mobile terminal apparatus (not shown), according to the response information.

The server 130 is, for example, a web server or a cloud server, and has a speech understanding module 132. In the present embodiment, the speech understanding module 132 comprises a speech recognition module 1322 and a speech processing module 1324, wherein the speech processing module 1324 is coupled to the speech recognition module 1322. The speech recognition module 1322 receives the voice signal transmitted from the speech sampling module 124 and converts the voice signal into a plurality of segment semantics (such as words or phrases). The speech processing module 1324 then parses the meaning represented by these segment semantics (such as intention, time, place and so on) and thereby determines the meaning expressed by the voice signal. In addition, the speech processing module 1324 can generate corresponding response information according to the parsing result. In the present embodiment, the speech understanding module 132 can be implemented by a hardware circuit composed of one or several logic gates, or by computer program code. It is worth mentioning that, in another embodiment, the speech understanding module 132 can be configured in the mobile terminal apparatus 220, as in the speech control system 200 shown in Fig. 2.

The speech control method used with the speech control system 100 is described below. Fig. 3 is a flowchart of a speech control method according to an embodiment of the invention. Referring to Fig. 1 and Fig. 3 together, in step S302 the auxiliary actuating apparatus 110 sends a wireless transmission signal to the mobile terminal apparatus 120. In detail, when the first wireless transport module 112 of the auxiliary actuating apparatus 110 is triggered by receiving a trigger signal, the auxiliary actuating apparatus 110 sends the wireless transmission signal to the mobile terminal apparatus 120. In particular, when the trigger module 114 of the auxiliary actuating apparatus 110 is pressed by the user, the trigger module 114 is triggered by the trigger signal and causes the first wireless transport module 112 to send a wireless transmission signal to the second wireless transport module 122 of the mobile terminal apparatus 120, so that the first wireless transport module 112 links with the second wireless transport module 122 through the wireless communication protocol. As described above, the auxiliary actuating apparatus 110 is used only to turn on the voice system of the mobile terminal apparatus 120 and has no listening/talking function, so its internal circuit design can be simplified and its cost is lower. In other words, with respect to a hands-free headset/microphone attached to an ordinary mobile terminal apparatus 120, the auxiliary actuating apparatus 110 is another device, that is, the user may own both a hands-free headset/microphone and the auxiliary actuating apparatus 110 of the present application at the same time.

It is worth mentioning that the body of the auxiliary actuating apparatus 110 can be an article that the user can reach conveniently, such as a ring, a watch, earrings, a necklace, glasses or other portable articles, or an installation component such as a driving accessory disposed on a steering wheel, without being limited to the above. That is to say, the auxiliary actuating apparatus 110 is a device integrated into daily life, arranged so that the user can easily touch the trigger module 114 to turn on the voice system 121. Therefore, with the auxiliary actuating apparatus 110 of the present application, the voice system 121 of the mobile terminal apparatus 120 can be turned on and even the sound amplification function can be started (described in detail later), so that the user can listen and talk directly through the mobile terminal apparatus 120 without wearing a headset/microphone. Moreover, since the auxiliary actuating apparatus 110 is an article that the user already wears or uses, there is no discomfort in use.

In addition, the first wireless transport module 112 and the second wireless transport module 122 can each be in a sleep mode or a working mode. In the sleep mode the wireless transport module is turned off; that is, it does not receive or detect wireless transmission signals and cannot link with another wireless transport module. In the working mode the wireless transport module is turned on; that is, it constantly detects wireless transmission signals, or can send a wireless transmission signal at any time, and can link with another wireless transport module. Here, when the trigger module 114 is triggered and the first wireless transport module 112 is in the sleep mode, the trigger module 114 wakes the first wireless transport module 112 so that it enters the working mode, sends a wireless transmission signal to the second wireless transport module 122, and links with the second wireless transport module 122 of the mobile terminal apparatus 120 through the wireless communication protocol.

On the other hand, to prevent the first wireless transport module 112 from consuming too much power by remaining in the working mode, if the trigger module 114 is not triggered again within a preset time (for example 5 minutes) after the first wireless transport module 112 enters the working mode, the first wireless transport module 112 returns from the working mode to the sleep mode and stops linking with the second wireless transport module 122 of the mobile terminal apparatus 120.

Afterwards, in step S304, the second wireless transport module 122 of the mobile terminal apparatus 120 receives the wireless transmission signal in order to start the voice system 121. Then, in step S306, when the second wireless transport module 122 detects the wireless transmission signal, the mobile terminal apparatus 120 starts the voice system 121, and the speech sampling module 124 of the voice system 121 begins to receive voice signals, for example "What is the temperature today?", "Call Lao Wang." or "Please look up a telephone number.".

In step S308, the speech sampling module 124 sends the voice signal to the speech understanding module 132 in the server 130, so that the speech understanding module 132 parses the voice signal and generates response information. Furthermore, the speech recognition module 1322 in the speech understanding module 132 receives the voice signal from the speech sampling module 124 and divides the voice signal into a plurality of segment semantics, and the speech processing module 1324 performs speech understanding on these segment semantics to generate the response information used to respond to the voice signal.

In another embodiment of the present invention, the mobile terminal apparatus 120 can further receive the response information generated by the speech processing module 1324 and either output the response information through the voice output interface 127 or perform the operation specified by the response information. In step S310, the speech synthesis module 126 of the mobile terminal apparatus 120 receives the response information generated by the speech understanding module 132 and performs speech synthesis according to the content of the response information (such as words or phrases) to produce a voice response. And in step S312, the voice output interface 127 receives and outputs the voice response.

For example, when the user presses the trigger module 114 of the auxiliary actuating apparatus 110, the first wireless transport module 112 sends a wireless transmission signal to the second wireless transport module 122, so that the mobile terminal apparatus 120 starts the speech sampling module 124 of the voice system 121. Here, suppose the voice signal from the user is a question such as "What is the temperature today?". The speech sampling module 124 receives this voice signal and sends it to the speech understanding module 132 in the server 130 for parsing, and the speech understanding module 132 sends the response information produced by the parsing back to the mobile terminal apparatus 120. Supposing the content of the response information generated by the speech understanding module 132 is "30 ℃", the speech synthesis module 126 synthesizes the message "30 ℃" into a voice response, and the voice output interface 127 reports this voice response to the user.
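For illustration, the question-and-answer round trip of steps S302 to S312 can be sketched as below; the server call and audio helpers are assumed names, not the actual implementation.

```python
# Illustrative round trip for a spoken question such as "What is the temperature today?".
# server.understand, sample_voice, synthesize and play are assumed helpers.

def handle_wireless_trigger(server, sample_voice, synthesize, play):
    voice_signal = sample_voice()                      # S306: voice system records the user
    response_info = server.understand(voice_signal)    # S308: cloud parses and answers
    voice_answer = synthesize(response_info)           # S310: e.g. "30 degrees Celsius"
    play(voice_answer)                                 # S312: report the answer to the user
```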

In another embodiment, suppose the voice signal from the user is an imperative sentence such as "Call Lao Wang.". The speech understanding module 132 can recognize this imperative sentence as "a request to dial Lao Wang". In addition, the speech understanding module 132 can generate new response information, for example "Please confirm whether to call Lao Wang", and send this new response information to the mobile terminal apparatus 120. Here, the speech synthesis module 126 synthesizes the new response information into a voice response and reports it to the user through the voice output interface 127. Further, when the user replies with an affirmative answer such as "Yes", the speech sampling module 124 likewise receives and transmits this voice signal to the server 130 for the speech understanding module 132 to parse. After the speech understanding module 132 finishes parsing, it records a dialing command in the response information and sends it to the mobile terminal apparatus 120. At this time, the communication module 128 looks up the telephone number of "Lao Wang" according to the contact information recorded in the call database, and establishes a call connection between the mobile terminal apparatus 120 and another electronic device, that is, dials "Lao Wang".

In other embodiments, besides the above speech control system 100, the speech control system 200 or other similar systems can also be used to carry out the above method of operation, and the invention is not limited to the above embodiments.

To sum up, in the speech control system and method of the present embodiment, the auxiliary actuating apparatus can wirelessly turn on the voice function of the mobile terminal apparatus. Moreover, the body of the auxiliary actuating apparatus can be a convenient, everyday article, such as an ornament like a ring, a watch, earrings, a necklace or glasses, various other portable articles, or an installation component such as a driving accessory disposed on a steering wheel, without being limited to the above. Thus, compared with the discomfort of wearing an additional hands-free headset/microphone, turning on the voice system of the mobile terminal apparatus 120 with the auxiliary actuating apparatus 110 of the present application is more convenient.

It should be noted that the server 130 with the speech understanding module may be a web server or a cloud server, and a cloud server may involve the user's privacy. For example, the user would need to upload the complete address book to the cloud server in order to complete address-book-related operations such as making calls or sending short messages. Even if the cloud server uses an encrypted connection and transmits the data instantly without storing it, it is still difficult to eliminate the user's concern. Accordingly, another speech control method and a corresponding voice interaction system are provided below, in which the mobile terminal apparatus can carry out the voice interaction service with the cloud server without uploading the complete address book. In order to make the content of the present invention clearer, embodiments that can actually be implemented are given below as examples.

Fig. 4 is a block diagram of a voice interaction system according to an embodiment of the invention. Referring to Fig. 4, the voice interaction system 400 comprises a cloud server 410 and a mobile terminal apparatus 420, and the cloud server 410 and the mobile terminal apparatus 420 can be connected to each other. The voice interaction system 400 carries out the voice interaction service through the cloud server 410. That is, speech recognition is processed by the cloud server 410, which has powerful computing capability, thereby reducing the data processing load of the mobile terminal apparatus 420 and also improving the accuracy and speed of speech recognition.

The mobile terminal apparatus 420 comprises a processing unit 422, a communication module 424, a voice system 426 and a memory unit 428. In one embodiment, the mobile terminal apparatus 420 is also provided with a display unit 430. The processing unit 422 is coupled to the communication module 424, the voice system 426, the memory unit 428 and the display unit 430. An address book 429 is further stored in the memory unit 428.

The processing unit 422 is hardware with computing capability (such as a chipset or a processor) for controlling the overall operation of the mobile terminal apparatus 420. The processing unit 422 is, for example, a central processing unit (CPU), or another programmable microprocessor, digital signal processor (DSP), programmable controller, application specific integrated circuit (ASIC), programmable logic device (PLD) or other similar device. The communication module 424 is, for example, a network interface card, and can communicate with the cloud server 410 via wired or wireless transmission. The voice system 426 comprises at least a sound-receiving device such as a microphone for converting sound into an electronic signal. The memory unit 428 is, for example, a random access memory (RAM), a read-only memory (ROM), a flash memory or a magnetic disk storage device. The display unit 430 is, for example, a liquid crystal display (LCD) or a touch screen with a touch-control module.

On the other hand, the cloud server 410 is a physical host with powerful computing capability, or a super virtual machine composed of a group of physical hosts, used to carry out large-scale tasks. The cloud server 410 comprises a processing unit 412 and a communication module 414, wherein the communication module 414 of the cloud server 410 is coupled to its processing unit 412. The communication module 414 communicates with the communication module 424 of the mobile terminal apparatus 420, and is, for example, a network interface card that can communicate with the mobile terminal apparatus 420 via wired or wireless transmission.

In addition, the processing unit 412 in the cloud server 410 has stronger computing capability, for example a multi-core CPU or a CPU array composed of a plurality of CPUs. The processing unit 412 of the cloud server 410 comprises, for example, at least the speech understanding module 132 shown in Fig. 1. The processing unit 412 can parse the voice signal received from the mobile terminal apparatus 420 through the speech understanding module, and the cloud server 410 sends the parsing result to the mobile terminal apparatus 420 through the communication module 414, so that the mobile terminal apparatus 420 can perform the corresponding action according to the result.

The voice communication flow of the voice interaction system is described below with reference to Fig. 4.

Fig. 5 is a schematic diagram of a voice communication flow for the voice interaction system according to an embodiment of the invention. Referring to Fig. 4 and Fig. 5 together, in step S501 the mobile terminal apparatus 420 receives a first voice signal through the voice system 426, and in step S503 the first voice signal is sent to the cloud server 410 through the communication module 424. Here, the mobile terminal apparatus 420 receives the first voice signal from the user through an element such as the microphone of the voice system 426. For instance, suppose the mobile terminal apparatus 420 is a mobile phone and the user says "Call Lao Wang" to the phone; after receiving the voice signal "Call Lao Wang", the voice system 426 sends it to the cloud server 410 through the communication module 424. In one embodiment, the voice system 426 can be started by the auxiliary actuating apparatus shown in Figs. 1 to 3.

Next, in step S505, in the cloud server 410 the processing unit 412 parses the first voice signal using the speech understanding module, and in step S507 the processing unit 412 sends the communication target obtained from the first voice signal to the mobile terminal apparatus 420 through the communication module 414. Taking the first voice signal "Call Lao Wang" as an example, the processing unit 412 of the cloud server 410 parses the first voice signal with the speech understanding module to obtain a communication instruction and a communication target. That is, the speech understanding module can parse the first voice signal into "call" and "Lao Wang"; accordingly, the processing unit 412 of the cloud server 410 determines that the communication instruction is a dialing instruction and that the communication target is "Lao Wang", and sends them to the mobile terminal apparatus 420 through the communication module 414.
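The parsing in steps S505 to S507 amounts to extracting an instruction and a target from the recognized text. The following hedged sketch uses a made-up keyword table rather than the patent's actual speech understanding module.

```python
# Illustrative only: mapping recognized text to a communication instruction and target.
# The real speech understanding module is not limited to keyword matching.

INSTRUCTION_KEYWORDS = {
    "call": "dial",
    "phone": "dial",
    "text": "send_message",
}

def parse_first_voice(recognized_text):
    instruction, target = None, None
    for keyword, command in INSTRUCTION_KEYWORDS.items():
        if keyword in recognized_text.lower():
            instruction = command
            # Crude illustration: whatever follows the keyword is the target.
            target = recognized_text.lower().split(keyword, 1)[1].strip()
            break
    return instruction, target

# parse_first_voice("Call Lao Wang") -> ("dial", "lao wang")
```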

Then, in step S509, in the mobile terminal apparatus 420, the processing unit 422 of the mobile terminal apparatus 420 searches the address book 429 in the memory unit 428 according to the communication target and obtains a selection list that matches the communication target. For example, in searching the address book the processing unit 422 of the mobile terminal apparatus 420 finds several contact entries containing "Wang", generates a selection list accordingly, and shows it on the display unit 430 for the user to choose from.

For instance, the selection list obtained by searching the address book for contact entries matching the communication target "Lao Wang" is shown in Table 1. In this example, four matching contact entries are found, and the contact names in these entries, namely "Wang Congming", "Wang Wu", "Wang Anshi" and "Wang Wei", are written into the selection list.

Number  Contact name
1       Wang Congming
2       Wang Wu
3       Wang Anshi
4       Wang Wei

Table 1
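A sketch of step S509, producing a numbered selection list such as Table 1 from the local address book, is given below; the matching rule is an assumption, and the telephone numbers are placeholders, not data from the patent.

```python
# Illustrative construction of the selection list of step S509.
# Only contact names are placed in the list; the numbers stay on the device.

ADDRESS_BOOK = {
    "Wang Congming": "0911-111-111",   # placeholder numbers for illustration
    "Wang Wu": "0922-222-222",
    "Wang Anshi": "0933-333-333",
    "Wang Wei": "0944-444-444",
    "Li Si": "0955-555-555",
}

def build_selection_list(communication_target):
    names = [name for name in ADDRESS_BOOK if communication_target in name]
    # Numbered entries, as displayed on display unit 430 for the user to choose from.
    return {index + 1: name for index, name in enumerate(names)}

# build_selection_list("Wang")
# -> {1: "Wang Congming", 2: "Wang Wu", 3: "Wang Anshi", 4: "Wang Wei"}
```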

If the user then speaks to the mobile terminal apparatus 420, as shown in step S511, the mobile terminal apparatus 420 receives a second voice signal through the voice system 426. When the mobile terminal apparatus 420 receives the second voice signal, in step S513 the mobile terminal apparatus 420 sends the second voice signal and the selection list to the cloud server 410 at the same time through the communication module 424. For example, after viewing the selection list the user says something like "the 1st" or "Wang Congming" to the mobile terminal apparatus 420, forming the second voice signal, and the mobile terminal apparatus 420 sends the second voice signal together with the selection list to the cloud server 410.

In addition, the user may also say anything else; that is, regardless of what the user says, as long as the mobile terminal apparatus 420 receives the second voice signal, it sends the second voice signal and the selection list to the cloud server 410 at the same time.

It is worth mentioning that, in the present application, the "complete" address book is not uploaded to the cloud server 410; only the entries matching the communication target are uploaded, in the form of the "selection list", for the second round of voice signal analysis. In other words, only "part" of the contact data is uploaded. In one embodiment, the selection list uploaded by the mobile terminal apparatus 420 to the cloud server 410 can include only the contact names, without the telephone numbers or other information. The content of the uploaded selection list can be set according to the user's needs.

It should also be noted that, in the present application, the second voice signal and the selection list are sent to the cloud server 410 at the same time. Compared with existing communication methods that avoid uploading the address book but must parse each voice signal and each list in separate rounds, with only one piece of information per step, the voice communication method of the present application is faster.

Then, in the cloud server 410, the processing unit 412 parses the second voice signal using the speech understanding module, as shown in step S515. For example, if the speech understanding module parses the content of the second voice signal as "the 3rd", the processing unit 412 of the cloud server 410 can further compare it with the 3rd contact entry in the selection list received from the mobile terminal apparatus 420. Taking Table 1 as an example, the 3rd contact entry is "Wang Anshi".

It should be noted that, with the design of the speech understanding module 132 shown in Fig. 1, the user does not need to say the complete content of a selection list entry, such as "the 1st, Wang Congming", as the second voice signal. The user only needs to say part of the content, such as "the 1st" or "Wang Congming", as the second voice signal, and this, uploaded together with the selection list to the speech understanding module 132 of the cloud server, is enough to parse the selection target. In other words, the selection list comprises a plurality of entries, each entry having at least a number and the content corresponding to that number (such as a name, a telephone number and so on), and the second voice signal corresponds to the number or to part of the content.

Afterwards, in step S517, the cloud server 410 sends the communication instruction and the selection target to the mobile terminal apparatus 420 through its communication module 414. In other embodiments, the cloud server 410 may also transmit the communication instruction to the mobile terminal apparatus 420 for storage immediately after parsing the first voice signal in step S505, and transmit the selection target later; the timing of transmitting the communication instruction is not limited here.

After the mobile terminal apparatus 420 receives the communication instruction and the selection target, in step S519 the mobile terminal apparatus 420 performs, through its processing unit 422, the communication operation corresponding to the communication instruction on the selection target. The communication instruction is an instruction that needs to use the address book content, such as a dialing instruction or a messaging instruction, and is obtained by the cloud server 410 from the first voice signal. For example, if the content of the first voice signal is "Call Lao Wang", the cloud server 410 determines from "call" that the communication instruction is a dialing instruction. As another example, if the content of the first voice signal is "Send a short message to Lao Wang", the cloud server 410 determines from "send a short message" that the communication instruction is a messaging instruction. The selection target is obtained by the cloud server 410 from the second voice signal and the selection list. Taking the selection list shown in Table 1 as an example, if the content of the second voice signal is "the 3rd", the cloud server 410 determines that the selection target is "Wang Anshi". The communication operation is, for example, calling the selection target, or starting a messaging interface to send a short message to the selection target.

It should be noted that the selection list obtained by the mobile terminal apparatus 420 in step S509 can include only the contact names, without telephone numbers or other information. In that case, when the mobile terminal apparatus 420 receives the communication instruction and the selection target from the cloud server 410, the processing unit 422 of the mobile terminal apparatus 420 takes the telephone number corresponding to the selection target from the address book and performs the communication operation corresponding to the communication instruction according to that telephone number.

In addition, in other embodiments, the selection list obtained by the mobile terminal apparatus 420 in step S509 can also include both the contact names and the telephone numbers, or other information. In that case, in step S515 the processing unit 412 of the cloud server 410 can obtain the telephone number of the selection target from the second voice signal and the selection list, and in step S517 send the communication instruction and the telephone number to the mobile terminal apparatus 420. Accordingly, in step S519 the mobile terminal apparatus 420 performs the communication operation corresponding to the communication instruction according to the telephone number.

To sum up, the present application uploads the selection list produced from the first voice signal together with the second voice signal, from which the selection target is produced, to the cloud server with powerful computing capability for speech understanding, and the selection list comprises only part of the address book. Therefore, the speech control system of the present application can have both higher processing efficiency and better security.

On the other hand, it should be noted that although the auxiliary actuating apparatus described above solves the problem that the user cannot immediately touch the mobile terminal apparatus but needs to use the voice system, allowing the user to converse with the mobile terminal apparatus through speech understanding technology, the sound amplification function still has to be started from the mobile terminal apparatus itself. When the user cannot immediately touch the mobile terminal apparatus but needs the sound amplification function, a design that can only be started from the device itself causes inconvenience. For this reason, the present invention proposes a method of turning on the sound amplification function and a corresponding device, which allow the user to turn on the sound amplification function more conveniently. In order to make the content of the present invention clearer, embodiments that can actually be implemented are given below as examples.

Fig. 6 is a system schematic diagram of a mobile terminal apparatus according to an embodiment of the invention. Referring to Fig. 6, in the present embodiment the mobile terminal apparatus 600 comprises a voice system, an input unit 620, a dialing unit 630, a receiver 640, public address equipment 650 and a processing unit 660. In another embodiment of the present invention, the mobile terminal apparatus 600 can also comprise an earphone 670. The mobile terminal apparatus 600 can be a mobile phone or a similar electronic device; it is similar to the mobile terminal apparatus 120 of Fig. 1, and its details can be found in the foregoing description and are not repeated here. The processing unit 660 is coupled to the speech sampling module 610, the input unit 620, the dialing unit 630, the receiver 640, the public address equipment 650 and the earphone 670. The voice system comprises the speech sampling module 610, which converts sound into an input speech signal SAI; the speech sampling module 610 can be a microphone or a similar electronic component. In other words, the speech sampling module 610 can be regarded as part of the voice system, and this voice system is similar to the voice system 121 of Fig. 1, whose details can be found in the foregoing description and are not repeated here. The input unit 620 provides an input operation signal SIO corresponding to the user's operation, and can be a keyboard, a touch panel or a similar electronic component. The dialing unit 630 is controlled by the processing unit 660 to perform the dialing function. The receiver 640, the public address equipment 650 and the earphone 670 convert the output audio signal SAO provided by the processing unit 660 into sound, and can therefore be regarded as voice output interfaces. The public address equipment 650 is, for example, a loudspeaker. The earphone 670 can be at least one of a wired earphone and a wireless earphone.

As can be seen from the foregoing, the voice function can be turned on by pressing a physical button of the mobile communication device, by operating the screen, or by using the auxiliary actuating apparatus of the present invention. Assuming the voice function is turned on, when the user speaks to the mobile terminal apparatus 600, the sound is converted into the input speech signal SAI by the speech sampling module 610. The processing unit 660 then matches the input speech signal SAI against information such as the contact names and telephone numbers in the address book. When information in the address book matches the input speech signal SAI, the processing unit 660 opens the dialing function of the dialing unit 630 and the public address equipment 650, so that after the connection is established the user can talk with the contact. In detail, the processing unit 660 converts the input speech signal SAI into an input string and compares the input string with the contact names and telephone numbers in the address book. When the input string matches one of the contact names or telephone numbers, the processing unit 660 opens the dialing function of the dialing unit 630; conversely, when the input string matches none of the contact names or telephone numbers, the processing unit 660 does not open the dialing function of the dialing unit 630.
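A minimal sketch of this matching step, assuming the recognized input string and a simple contains-check against names and numbers, is given below for illustration only.

```python
# Illustrative check of whether the recognized input string matches the address
# book, which then opens the dialing unit 630 and the public address equipment 650.

def handle_input_voice(input_string, address_book, dialing_unit, speaker):
    matched = None
    for name, number in address_book.items():
        if name in input_string or number in input_string:
            matched = (name, number)
            break

    if matched is None:
        return False                # no match: the dialing function is not opened

    dialing_unit.dial(matched[1])   # open the dialing function
    speaker.enable()                # open the public address equipment for hands-free talk
    return True
```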

In other words, in the present embodiment, when the processing unit 660 confirms that the input speech signal SAI matches content in the contact list, the processing unit 660 provides an enabling signal to automatically turn on the conversation sound amplification function of the mobile terminal apparatus 600. Specifically, the processing unit 660 automatically asserts the enabling signal to the speakerphone 650, converts the input speech signal SAI into conversation transmission data DTC, and transmits the conversation transmission data DTC to the contact (another mobile terminal apparatus, not illustrated) through the dialing unit 630. Meanwhile, the processing unit 660 receives conversation reception data DRC through the dialing unit 630 and, according to the conversation reception data DRC, provides an output voice signal SAO to the speakerphone 650, so that the output voice signal SAO is converted into sound and played back in amplified form.
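The paragraph above amounts to enabling the speakerphone and then routing microphone input out through the dialing unit while routing received call data to the speakerphone. A minimal Python sketch under assumed, hypothetical SpeakerPhone and DialingUnit interfaces (not defined by the patent):

    class SpeakerPhone:
        def __init__(self):
            self.enabled = False

        def enable(self):
            self.enabled = True

        def play(self, audio):
            if self.enabled:
                print(f"[speakerphone] playing {len(audio)} samples")

    class DialingUnit:
        def send(self, dtc):
            print(f"[uplink] sending {len(dtc)} bytes of conversation data")

        def receive(self):
            # Placeholder for conversation reception data DRC from the far end.
            return b"\x00" * 160

    def handle_voice_dial(sai, speakerphone, dialing_unit):
        """Once a voice-originated dial is confirmed, enable the speakerphone,
        forward the caller's speech (DTC) and play back the remote party (DRC)."""
        speakerphone.enable()
        dtc = bytes(sai)          # input speech signal converted to uplink data
        dialing_unit.send(dtc)
        drc = dialing_unit.receive()
        speakerphone.play(drc)    # output voice signal SAO, amplified

    handle_voice_dial([0, 12, 255, 3], SpeakerPhone(), DialingUnit())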

It is worth mentioning that, at present, the sound amplification function is still started by operating the mobile terminal apparatus itself; when the user cannot immediately reach the mobile terminal apparatus but needs the sound amplification function, this existing design is inconvenient. Therefore, in the present embodiment, when the voice system is enabled, the act of dialing by voice further turns on the sound amplification function, making the call convenient for the user.

In another embodiment, when both the speakerphone 650 and the earphone 670 are connected to the mobile terminal apparatus 600 (that is, both the speakerphone 650 and the earphone 670 are coupled to the processing unit 660), and the signal provided to the processing unit 660 is the input speech signal SAI, the processing unit 660 may, according to the user's setting, treat the earphone 670 as the first-priority talking mode (the preset value) and the speakerphone 650 as the second-priority talking mode. Alternatively, the speakerphone 650 may be set as the first-priority talking mode (the preset value) and the earphone 670 as the second-priority talking mode.
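A minimal sketch of such a priority setting follows, using hypothetical names (OUTPUT_PRIORITY, pick_output); the actual mechanism is left open by the embodiment.

    # Hypothetical user setting: output interfaces in descending priority for
    # voice-originated calls. The preset value lists the earphone first.
    OUTPUT_PRIORITY = ["earphone", "speakerphone"]

    def pick_output(connected):
        """Return the highest-priority connected voice output interface."""
        for interface in OUTPUT_PRIORITY:
            if interface in connected:
                return interface
        return None

    # Example: with both interfaces connected, the earphone wins by default;
    # swapping the list order makes the speakerphone the first priority.
    print(pick_output({"earphone", "speakerphone"}))  # -> earphone
    print(pick_output({"speakerphone"}))              # -> speakerphone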

In addition, in another embodiment, when the user provides the input operation signal SIO through the input unit 620, the user clearly does not have the problem of being unable to reach the mobile terminal apparatus. Therefore, after the processing unit 660 performs the contact-list matching according to the input operation signal SIO, the output voice signal SAO can be sent, through the processing unit 660 and the dialing unit 630, to a voice output interface such as the speakerphone 650, the receiver 640, or the earphone 670, depending on the output interface preset by the user (the preset value).

For instance, when the user says "phone Lao Wang" to the mobile terminal apparatus, the speech sampling module 610 receives the sound and converts it into the input speech signal SAI. The input speech signal SAI is then parsed by the speech understanding module to obtain a communication instruction (for example, making a phone call) and a communication target (for example, Lao Wang), from which a selected target (for example, Wang Anshi) is further obtained. Because the communication instruction was parsed from "voice", the processing unit 660 automatically asserts the enabling signal to turn on the speakerphone 650 for the subsequent amplified conversation. That is to say, after the dialing unit completes the connection, the user can talk with Lao Wang directly through the speakerphone. In another example, when the user says "answer the call" to the mobile terminal apparatus, the speech sampling module 610 receives the sound and converts it into the input speech signal SAI, which is parsed by the speech understanding module to obtain a communication instruction (for example, answering the call). Because the communication instruction was parsed from "voice", the processing unit 660 automatically asserts the enabling signal to turn on the speakerphone 650, so that the user can talk with Lao Wang directly through the speakerphone. Embodiments of the configuration and details of the speech understanding module have been described above and are not repeated here. Likewise, the communication target and the finally obtained selected target may be determined using the aforementioned cloud-server-based method or other similar methods, which are not repeated here. Of course, as mentioned above, when the speakerphone 650 and the earphone 670 coexist, the processing unit 660 may, according to the user's setting, treat the earphone 670 as the first-priority talking mode and the speakerphone 650 as the second-priority talking mode.
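As an illustration of the dispatch just described, the sketch below distinguishes voice-originated commands (which also enable the speakerphone) from other triggers. The parse-result format, the Stub class, and the function names are assumptions made for the example only.

    class Stub:
        def enable(self):
            print("speakerphone enabled")

        def connect(self, target):
            print(f"dialing {target}")

    def dispatch(parse_result, speakerphone, dialing_unit):
        """parse_result is assumed to look like:
        {"instruction": "call", "target": "Wang Anshi", "source": "voice"}.
        Voice-originated dial/answer commands additionally enable amplification."""
        if parse_result["instruction"] not in ("call", "answer"):
            return  # unrelated command, e.g. a weather query
        if parse_result["source"] == "voice":
            speakerphone.enable()  # amplified conversation follows
        if parse_result["instruction"] == "call":
            dialing_unit.connect(parse_result["target"])

    # Saying "phone Lao Wang" yields a "call" instruction parsed from voice,
    # so the speakerphone is enabled before the call is placed.
    dispatch({"instruction": "call", "target": "Wang Anshi", "source": "voice"},
             Stub(), Stub())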

In another example, if the user selects "Wang Anshi" from the contact list with a button or by touch on a display unit similar to the display unit 430 of Fig. 4, the input operation signal SIO is provided by the input unit 620. The processing unit 660 then performs the contact-list matching according to the input operation signal SIO, and, through the processing unit 660, the dialing unit 630, and the user's setting, the output voice signal SAO is sent to a voice output interface such as the speakerphone 650, the receiver 640, or the earphone 670, so that the user can talk with Wang Anshi.

From the above, an automatic starting method of the conversation sound amplification function of a mobile terminal apparatus can be summarized. Fig. 7 is a flow chart of the automatic starting method of the conversation sound amplification function of the mobile terminal apparatus according to an embodiment of the invention. Referring also to Fig. 7, in the present embodiment, it is first determined whether the processing unit 660 of the mobile terminal apparatus 600 is about to enable the dialing function (step S710). In other words, the input operation signal SIO from the input unit 620 or the input speech signal SAI from the speech sampling module 610 may be unrelated to dialing; it may request another operation, such as enabling a computing function of the mobile terminal apparatus or querying the weather through the voice system. When the processing unit 660 determines from the input signal that the dialing function of the dialing unit 630 is to be enabled, that is, the input signal is related to a dialing action, the determination of step S710 is "Yes" and step S720 is executed; otherwise, when the processing unit 660 determines from the input signal that the dialing function will not be enabled, that is, the determination of step S710 is "No", the automatic starting method of the conversation sound amplification function ends.

Then, in step S720, it is determined whether the processing unit 660 has received an input speech signal SAI for enabling the dialing function. When the processing unit 660 receives, from the speech sampling module 610, the input speech signal SAI for enabling the dialing function, that is, the determination of step S720 is "Yes", it is then detected whether the processing unit 660 is connected to the earphone 670 (step S730). When the processing unit 660 is connected to the earphone 670, that is, the determination of step S730 is "Yes", the processing unit 660 automatically asserts the enabling signal to activate the earphone and sends the output voice signal SAO to the earphone 670 (step S740); otherwise, when the processing unit 660 is not connected to the earphone 670, that is, the determination of step S730 is "No", the processing unit 660 automatically asserts the enabling signal to activate the speakerphone 650 and sends the output voice signal SAO to the speakerphone 650 of the mobile terminal apparatus 600, so as to turn on the conversation sound amplification function of the mobile terminal apparatus 600 (step S750). It is worth mentioning that steps S730 to S750 as described above apply to the case where the processing unit 660 receives the input speech signal for enabling the dialing function and the user has set the earphone 670 as the preferred voice output interface (assuming both the speakerphone 650 and the earphone 670 are connected). In other embodiments, the user may instead set the speakerphone 650 as the preferred voice output interface. Of course, when only one of the earphone 670 and the speakerphone 650 is connected, the connected device may be set as the preferred voice output interface. Those skilled in the art may adapt the above implementation steps to their needs.

On the other hand, when the processing unit 660 does not receive, from the speech sampling module 610, an input speech signal SAI for enabling the dialing function, that is, the determination of step S720 is "No", it is then detected whether the processing unit 660 is connected to the earphone 670 (step S760). Specifically, if the processing unit 660 does not receive the input speech signal SAI from the speech sampling module 610 but is still about to enable the dialing function, it means that the processing unit 660 has received the input operation signal SIO from the input unit 620 and that the input operation signal SIO is related to a dialing action. When the processing unit 660 is connected to the earphone 670, that is, the determination of step S760 is "Yes", the processing unit 660 automatically asserts the enabling signal to activate the earphone 670 and sends the output voice signal SAO to the earphone 670 (step S740). Otherwise, when the processing unit 660 is not connected to the earphone 670, that is, the determination of step S760 is "No", the processing unit 660 provides the output voice signal SAO to one of the speakerphone and the receiver according to the preset value (step S770). The order of the above steps is illustrative, and the embodiment of the invention is not limited thereto. It is worth mentioning that providing the output voice signal SAO to the earphone 670 when the determination of step S760 is "Yes" applies to the case where the user has set the earphone 670 as the preferred voice output interface (assuming the receiver 640, the speakerphone 650, and the earphone 670 are all connected). In other embodiments, the user may instead set the receiver 640 or the speakerphone 650 as the preferred voice output interface. Of course, when only one of the receiver 640, the speakerphone 650, and the earphone 670 is connected, the connected device may be set as the preferred voice output interface. Those skilled in the art may adapt the above implementation steps to their needs.
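Read together, steps S710 through S770 amount to the following decision logic. The Python sketch below is only an illustration of that flow under assumed helper names and data shapes; it is not part of the claimed method, and the fallback between speakerphone and receiver in step S770 is one possible reading of the preset value.

    def auto_start_amplification(input_signal, connected, preferred="earphone"):
        """Illustrative rendering of Fig. 7 (steps S710-S770).

        input_signal: dict with 'dial' (bool) and 'source' ('voice' or 'input').
        connected: set of connected output interfaces,
                   e.g. {"earphone", "speakerphone", "receiver"}.
        Returns the chosen voice output interface, or None when no dialing occurs.
        """
        # S710: is the input related to a dialing action at all?
        if not input_signal.get("dial"):
            return None

        # S720: did the dial request arrive as an input speech signal SAI?
        if input_signal.get("source") == "voice":
            # S730-S750: prefer the earphone when present, otherwise open the
            # speakerphone so the conversation is amplified automatically.
            if preferred == "earphone" and "earphone" in connected:
                return "earphone"          # S740
            return "speakerphone"          # S750

        # The dial request came from the input unit (input operation signal SIO).
        # S760-S770: use the earphone when present, otherwise fall back to the
        # preset choice between the speakerphone and the receiver.
        if "earphone" in connected:
            return "earphone"              # S740
        return "speakerphone" if "speakerphone" in connected else "receiver"  # S770

    # Example: a voice dial with no earphone connected opens the speakerphone.
    print(auto_start_amplification({"dial": True, "source": "voice"},
                                   {"speakerphone", "receiver"}))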

In summary, in the mobile terminal apparatus and the automatic starting method of its conversation sound amplification function according to the embodiments of the invention, when the processing unit receives an input speech signal for enabling the dialing function, it not only enables the dialing function but also automatically turns on the sound amplification function and sends the output voice signal to the speakerphone. Thus, when the user cannot immediately reach the mobile terminal apparatus but needs the sound amplification function, the sound amplification function can be started through the voice system, improving the ease of use of the mobile terminal.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Those skilled in the art may make minor changes and refinements without departing from the spirit and scope of the invention; the protection scope of the invention is therefore defined by the appended claims.

Claims (29)

1. A voice interaction system, comprising:
a mobile terminal device, comprising:
a voice system, receiving a first voice signal and a second voice signal respectively;
a first communication module, transmitting the first voice signal and the second voice signal respectively; and
a first processing unit, coupled to the first communication module and the voice system; and
a cloud server, adapted to connect with the mobile terminal device,
wherein the cloud server receives the first voice signal from the first communication module and parses out a communication target and a communication instruction according to the first voice signal; the first processing unit receives the communication target and searches an address book located in the mobile terminal device according to the communication target to obtain a selection list matching the communication target, and, when the voice system receives the second voice signal, simultaneously transmits the second voice signal and the selection list to the cloud server through the first communication module to generate a selected target; and the first processing unit receives and executes the communication instruction and the selected target.
2. The voice interaction system as claimed in claim 1, wherein the communication instruction is an instruction that requires the content of the address book.
3. The voice interaction system as claimed in claim 2, wherein the communication instruction comprises dialing and sending a short message.
4. The voice interaction system as claimed in claim 1, wherein the selection list comprises a plurality of pieces of item information, each piece of item information comprises a number and content corresponding to the number, and the second voice signal is related to the number or to part of the content corresponding to the number.
5. The voice interaction system as claimed in claim 1, wherein, according to the communication target, the selection list comprises part of the content of the address book.
6. The voice interaction system as claimed in claim 1, further comprising a storage unit for storing the address book.
7. The voice interaction system as claimed in claim 1, further comprising a display unit for displaying the selection list for a user to make a selection, the second voice signal being generated based on the selection.
8. The voice interaction system as claimed in claim 1, wherein the cloud server comprises:
a second processing unit, having a voice processing module, parsing the first voice signal and the second voice signal through the voice processing module and obtaining the selected target based on the second voice signal and the selection list; and
a second communication module, coupled to the second processing unit and communicating with the first communication module;
wherein the cloud server transmits the communication instruction and the selected target to the mobile terminal device through the second communication module, so that the mobile terminal device executes a communication action corresponding to the communication instruction according to the selected target.
9. The voice interaction system as claimed in claim 1, wherein the cloud server obtains a telephone number of the selected target based on the second voice signal and the selection list and transmits the communication instruction and the telephone number to the mobile terminal device, so that the mobile terminal device executes the communication action corresponding to the communication instruction according to the telephone number.
10. The voice interaction system as claimed in claim 1, wherein, in the mobile terminal device, the first processing unit retrieves the telephone number corresponding to the selected target from the address book, so as to execute the communication action corresponding to the communication instruction according to the telephone number.
11. A mobile terminal device, adapted to connect with a cloud server, comprising:
a voice system, receiving a first voice signal and a second voice signal respectively;
a communication module, transmitting the first voice signal and the second voice signal respectively; and
a processing unit, coupled to the communication module and the voice system,
wherein the communication module transmits the first voice signal to the cloud server, and the cloud server parses out a communication target and a communication instruction according to the first voice signal; the processing unit receives the communication target and searches an address book located in the mobile terminal device according to the communication target to obtain a selection list matching the communication target, and, when the voice system receives the second voice signal, simultaneously transmits the second voice signal and the selection list to the cloud server through the communication module to generate a selected target; and the processing unit receives and executes the communication instruction and the selected target.
12. The mobile terminal device as claimed in claim 11, wherein the communication instruction is an instruction that requires the content of the address book.
13. The mobile terminal device as claimed in claim 12, wherein the communication instruction comprises dialing and sending a short message.
14. The mobile terminal device as claimed in claim 11, wherein the selection list comprises a plurality of pieces of item information, each piece of item information comprises a number and content corresponding to the number, and the second voice signal is related to the number or to part of the content corresponding to the number.
15. The mobile terminal device as claimed in claim 11, wherein, according to the communication target, the selection list comprises part of the content of the address book.
16. The mobile terminal device as claimed in claim 11, further comprising a storage unit for storing the address book.
17. The mobile terminal device as claimed in claim 11, further comprising a display unit for displaying the selection list for a user to make a selection, the second voice signal being generated based on the selection.
18. The mobile terminal device as claimed in claim 11, wherein the cloud server obtains a telephone number of the selected target based on the second voice signal and the selection list and transmits the communication instruction and the telephone number to the mobile terminal device, so that the mobile terminal device executes the communication action corresponding to the communication instruction according to the telephone number.
19. The mobile terminal device as claimed in claim 11, wherein, in the mobile terminal device, the processing unit retrieves the telephone number corresponding to the selected target from the address book, so as to execute the communication action corresponding to the communication instruction according to the telephone number.
20. A method of voice communication, used in a mobile terminal device, the method comprising:
receiving a first voice signal and transmitting the first voice signal to a cloud server;
receiving, from the cloud server, a communication target parsed from the first voice signal;
searching an address book in the mobile terminal device according to the communication target to obtain a selection list matching the communication target;
receiving a second voice signal and simultaneously transmitting the second voice signal and the selection list to the cloud server; and
receiving and executing a communication instruction and a selected target from the cloud server.
21. The method of voice communication as claimed in claim 20, wherein the communication instruction is an instruction that requires the content of the address book.
22. The method of voice communication as claimed in claim 21, wherein the communication instruction comprises dialing and sending a short message.
23. The method of voice communication as claimed in claim 20, wherein the selection list comprises a plurality of pieces of item information, each piece of item information comprises a number and content corresponding to the number, and the second voice signal is related to the number or to part of the content corresponding to the number.
24. The method of voice communication as claimed in claim 20, wherein, according to the communication target, the selection list comprises part of the content of the address book.
25. The method of voice communication as claimed in claim 20, wherein the communication instruction is obtained by the cloud server based on the first voice signal, and the selected target is obtained by the cloud server based on the second voice signal and the selection list.
26. The method of voice communication as claimed in claim 20, wherein the step of searching the address book in the mobile terminal device according to the communication target to obtain the selection list matching the communication target comprises:
searching the address book for contact information matching the communication target; and
writing the contact information into the selection list, wherein the contact information comprises at least a contact name.
27. The method of voice communication as claimed in claim 20, further comprising, after the step of searching the address book in the mobile terminal device according to the communication target to obtain the selection list matching the communication target:
displaying the selection list for a user to make a selection, the second voice signal being generated based on the selection; and
receiving the second voice signal.
28. The method of voice communication as claimed in claim 20, further comprising, after the step of simultaneously transmitting the second voice signal and the selection list to the cloud server upon receiving the second voice signal:
receiving a telephone number of the selected target from the cloud server, so as to execute a communication action corresponding to the communication instruction according to the telephone number.
29. The method of voice communication as claimed in claim 20, further comprising:
when the communication instruction and the selected target are received from the cloud server, retrieving the telephone number corresponding to the selected target from the address book, so as to execute a communication action corresponding to the communication instruction according to the telephone number.

CN201210592490.XA 2012-12-31 2012-12-31 Voice interaction system, mobile terminal device and voice communication method Pending CN103095813A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201210592490.XA CN103095813A (en) 2012-12-31 2012-12-31 Voice interaction system, mobile terminal device and voice communication method
CN201310182848.6A CN103281466B (en) 2012-12-31 2013-05-17 Voice interactive system, mobile terminal device and method for voice communication
TW102121754A TWI497408B (en) 2012-12-31 2013-06-19 Voice interaction system, mobile terminal apparatus and method of voice communication
US13/923,383 US8934886B2 (en) 2012-12-31 2013-06-21 Mobile apparatus and method of voice communication

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210592490.XA CN103095813A (en) 2012-12-31 2012-12-31 Voice interaction system, mobile terminal device and voice communication method

Publications (1)

Publication Number Publication Date
CN103095813A true CN103095813A (en) 2013-05-08

Family

ID=48207936

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201210592490.XA Pending CN103095813A (en) 2012-12-31 2012-12-31 Voice interaction system, mobile terminal device and voice communication method
CN201310182848.6A Active CN103281466B (en) 2012-12-31 2013-05-17 Voice interactive system, mobile terminal device and method for voice communication

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201310182848.6A Active CN103281466B (en) 2012-12-31 2013-05-17 Voice interactive system, mobile terminal device and method for voice communication

Country Status (2)

Country Link
CN (2) CN103095813A (en)
TW (1) TWI497408B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103701972A (en) * 2013-11-28 2014-04-02 苏州佳世达电通有限公司 Method and system for intelligently replying voice message
CN104702789A (en) * 2015-03-11 2015-06-10 安徽声讯信息技术有限公司 Smart phone with voice control function and voice control method thereof
CN105788594A (en) * 2016-03-01 2016-07-20 江西掌中无限网络科技股份有限公司 Voice and meaning identification method and system of flow-free APP
CN106875939A (en) * 2017-01-13 2017-06-20 佛山市父母通智能机器人有限公司 To the Chinese dialects voice recognition processing method and intelligent robot of wide fluctuations
CN106952646A (en) * 2017-02-27 2017-07-14 深圳市朗空亿科科技有限公司 A kind of robot interactive method and system based on natural language
CN108417215A (en) * 2018-04-27 2018-08-17 三星电子(中国)研发中心 A kind of playback equipment exchange method and device
CN109587664A (en) * 2018-11-14 2019-04-05 深圳市芯中芯科技有限公司 A kind of voice dialing system of edge calculations in conjunction with cloud computing
WO2019206184A1 (en) * 2018-04-24 2019-10-31 北京中科寒武纪科技有限公司 Allocation system, method and apparatus for machine learning, and computer device
WO2020082710A1 (en) * 2018-10-22 2020-04-30 深圳市冠旭电子股份有限公司 Voice interaction control method, apparatus and system for bluetooth speaker
CN111200776A (en) * 2020-03-05 2020-05-26 北京声智科技有限公司 Audio playing control method and sound box equipment

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160016513A (en) * 2014-07-31 2016-02-15 삼성전자주식회사 Method and device for performing funtion of mobile device
WO2016018057A1 (en) 2014-07-31 2016-02-04 Samsung Electronics Co., Ltd. Method and device for providing function of mobile terminal
CN106534446B (en) * 2016-11-11 2020-03-27 努比亚技术有限公司 Mobile terminal dialing device and method
CN106657537A (en) * 2016-12-07 2017-05-10 努比亚技术有限公司 Terminal voice search call recording device and method
CN113329116B (en) * 2021-05-28 2023-07-07 北京小鹏汽车有限公司 Communication method of vehicles, server, vehicles and communication system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20030020768A (en) * 2001-09-04 2003-03-10 주식회사 케이티 Description of automatic voice call connection service method by construction of personal phone book database using speech recognition and its related methods
US20060008256A1 (en) * 2003-10-01 2006-01-12 Khedouri Robert K Audio visual player apparatus and system and method of content distribution using the same
EP2373073B1 (en) * 2008-12-26 2016-11-09 Panasonic Intellectual Property Corporation of America Communication device
CN102355525A (en) * 2011-07-15 2012-02-15 深圳市路畅科技有限公司 Method for making call by driver in running process of vehicle
CN102566904B (en) * 2011-11-25 2016-08-03 同济大学 A kind of West Xia Dynasty's voice terminal based on West Xia Dynasty's literary composition holographic code exchange interface
CN102682771B (en) * 2012-04-27 2013-11-20 厦门思德电子科技有限公司 Multi-speech control method suitable for cloud platform
CN102752729A (en) * 2012-06-25 2012-10-24 华为终端有限公司 Reminding method, terminal, cloud server and system

Also Published As

Publication number Publication date
CN103281466B (en) 2016-10-12
TW201426532A (en) 2014-07-01
TWI497408B (en) 2015-08-21
CN103281466A (en) 2013-09-04

Similar Documents

Publication Publication Date Title
CN103095813A (en) 2013-05-08 Voice interaction system, mobile terminal device and voice communication method
CN107277754B (en) 2020-02-28 Bluetooth connection method and Bluetooth peripheral equipment
CN113572731B (en) 2022-08-26 Voice communication method, personal computer, terminal and computer readable storage medium
CN103077716A (en) 2013-05-01 Auxiliary starting device, voice control system and method thereof
EP3735646B1 (en) 2021-11-24 Using auxiliary device case for translation
CN207053716U (en) 2018-02-27 A kind of earphone
US8934886B2 (en) 2015-01-13 Mobile apparatus and method of voice communication
CN103517170A (en) 2014-01-15 Remote-control earphone with built-in cellular telephone module
CN113709906A (en) 2021-11-26 Wireless audio system, wireless communication method and device
CN205539985U (en) 2016-08-31 Take children's wrist -watch of CDMA2000 EVDO mobile communication
CN104219357A (en) 2014-12-17 Voice instruction network telephone and operation method thereof
CN104952457A (en) 2015-09-30 Device and method for digital hearing aiding and voice enhancing processing
TWI506618B (en) 2015-11-01 Mobile terminal device and method for automatically opening sound output interface of the mobile terminal device
CN105208514A (en) 2015-12-30 Slave equipment and master and slave equipment automatic matching system and method
CN109831766B (en) 2022-03-11 Data transmission method, Bluetooth equipment assembly and Bluetooth communication system
CN209659561U (en) 2019-11-19 Support the speaker and system of voice communication
CN208940167U (en) 2019-06-04 Bluetooth headset with voice arousal function
CN206162095U (en) 2017-05-10 Intelligence house robot with voice interaction function
CN206963017U (en) 2018-02-02 A kind of wireless microphone based on Voice command
CN107436747B (en) 2021-06-08 Terminal application program control method and device, storage medium and electronic equipment
CN203387581U (en) 2014-01-08 Radio with handset functions
CN103136924A (en) 2013-06-05 Multifunctional remote control unit
CN111399672B (en) 2021-02-05 Control method and system of intelligent mouse
CN110265022A (en) 2019-09-20 A kind of method and smart machine transmitting voice
CN201123033Y (en) 2008-09-24 Acoustic control cordless telephone device

Legal Events

Date Code Title Description
2013-05-08 C06 Publication
2013-05-08 PB01 Publication
2013-06-12 C10 Entry into substantive examination
2013-06-12 SE01 Entry into force of request for substantive examination
2013-07-17 C02 Deemed withdrawal of patent application after publication (patent law 2001)
2013-07-17 WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130508