That kind of device is still a dream.
Speech recognition requires memory and processing power.
If you want to recognize a limited set of words, it can be done
on a single chip (depending on the chip). Example: if you want
a hands-free microscope so you can keep both hands free for bio work,
you can implement something that recognizes "left, right,
up, down, forward, backward, stop" so that you can freely move and
focus on the point of interest. If you only need it to recognize a single
person, it's quite simple. Difficulty increases with:
- multiple users;
- the size of the vocabulary.
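To give an idea of how small-vocabulary, single-speaker recognition can work, here is a minimal sketch of template matching with dynamic time warping (DTW): each command word is stored as one reference feature sequence, and an incoming utterance is matched to the closest template. The feature values below are hypothetical stand-ins; a real device would extract features (e.g. MFCCs) from the microphone signal first.

```python
def dtw_distance(a, b):
    """Classic DTW: cost of the best time-warped alignment of a and b."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, or match step.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognize(sample, templates):
    """Return the command whose template is closest to the sample under DTW."""
    return min(templates, key=lambda word: dtw_distance(sample, templates[word]))

# Hypothetical single-speaker templates, one short feature sequence per word.
templates = {
    "up":   [0.1, 0.5, 0.9, 0.5, 0.1],
    "down": [0.9, 0.5, 0.1, 0.5, 0.9],
    "stop": [0.5, 0.5, 0.5, 0.5, 0.5],
}

# A noisy, time-stretched utterance of "up" still matches, because
# DTW absorbs differences in speaking rate.
print(recognize([0.1, 0.2, 0.5, 0.8, 0.9, 0.6, 0.2], templates))
```

Because the cost grows only with vocabulary size times template length, this is the kind of thing that fits on a modest chip for a handful of command words, and it is also why adding users and words makes the problem much harder.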
For your information, the latest iPhone uses servers for speech recognition.
This means that even huge teams like Apple cannot do it within the
phone itself. They use powerful servers to do that.