VoxForge
Our service offers: an online subtitle editor for YouTube videos, a web page for creating captions for YouTube videos, and tools for making YouTube subtitles.
http://www.captionsmaker.com
This is related to the recognizer used by Palaver.
I tested this recognizer on 10 languages. It works well on the surface, but it often makes serious errors that impede understanding.
The test (including audio files) and Java code are at
http://www.jaivox.com/googlelangs.html
The open source GNU/Linux speech recognition program that uses Google's voice APIs on the back-end is now called Palaver. Google+ Community, installation video, and Github links are here:
https://github.com/JamezQ/Palaver
Palaver Speech Recognition Community on Google+:
https://plus.google.com/communities/117295420902112738135
Information about Palaver
http://www.youtube.com/watch?v=a5-aolmt0OE
How to install Ubuntu Voice Recognition (Palaver by James McClain)
http://www.youtube.com/watch?v=pxom292XW_gof
There was a demonstration of Dragonfly at PyCon 2013.
https://www.youtube.com/watch?v=8SkdfdXWYaI
To skip straight to Tavis' demo of Dragonfly: 8:34
https://www.youtube.com/watch?v=8SkdfdXWYaI#t=8m34s
"Tavis Rudd
Tavis Rudd is a coder/sysadmin who talks to his computer.
Presentations
Using Python to Code by Voice"
"Tavis Rudd - Two years ago I developed a case of Emacs Pinkie (RSI) so severe my hands went numb and I could no longer type or work. Desperate, I tried voice recognition. At first programming with it was painfully slow but, as I couldn't type, I persevered. After several months of vocab tweaking and duct-tape coding in Python and Emacs Lisp, I had a system that enabled me to code faster and more efficiently by voice than I ever had by hand. In a fast-paced live demo, I will create a small system using Python, plus a few other languages for good measure, and deploy it without touching the keyboard. The demo gods will make a scheduled appearance. I hope to convince you that voice recognition is no longer a crutch for the disabled or limited to plain prose. It's now a highly effective tool that could benefit all programmers.".
"Tavis Rudd - Two years ago I developed a case of Emacs Pinkie (RSI) so severe my hands went numb and I could no longer type or work. Desperate, I tried voice recognition. At first programming with it was painfully slow but, as I couldn't type, I persevered. After several months of vocab tweaking and duct-tape coding in Python and Emacs Lisp, I had a system that enabled me to code faster and more efficiently by voice than I ever had by hand.
In a fast-paced live demo, I will create a small system using Python, plus a few other languages for good measure, and deploy it without touching the keyboard. The demo gods will make a scheduled appearance. I hope to convince you that voice recognition is no longer a crutch for the disabled or limited to plain prose. It's now a highly effective tool that could benefit all programmers.".
Canonical's new tablet operating system Ubuntu on Tablets includes Voice Control in the HUD.
From Ted Gould's HUD 2.0 page:
With the HUD we realized that we had a relatively small data set, and so it would be possible to get reasonable voice recognition using the resources available in the device. [...]
We built the voice feature around two different Open Source voice engines: Pocket Sphinx and Julius. While we started with Pocket Sphinx, we weren't entirely happy with its performance, and found Julius to start faster and provide better results. Unfortunately Julius is licensed under the 4-clause BSD license, putting it in multiverse and preventing us from linking to it in the Ubuntu archive version of HUD.
Link to the source: http://bazaar.launchpad.net/~indicator-applet-developers/hud/phablet/files/head:/src/
Edit: I should mention that they are working with the CMU Sphinx group to resolve the Pocket Sphinx performance issues they experienced; it is likely just a configuration issue.
Chrome version 25 now supports speech recognition using the Web Speech API and Google's server-based speech recognition.
From the W3 Web Speech API specification page:
This specification defines a JavaScript API to enable web developers to incorporate speech recognition and synthesis into their web pages. It enables developers to use scripting to generate text-to-speech output and to use speech recognition as an input for forms, continuous dictation and control. The JavaScript API allows web pages to control activation and timing and to handle results and alternatives.
The Google API uses their (hidden) speech recognizer.
I have created a simple (one Java file) voice-based command handler for Linux using Sphinx. The current example works for only 10 voice commands, but this can be extended by adding more commands to a text file. It is at http://www.jaivox.com/speechcommand.html
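The approach described above — mapping a small fixed set of recognized phrases to actions, with new commands added via a text file — can be sketched in Python. This is a minimal illustration, not the Jaivox code; the "phrase = command" file format and the example commands are assumptions.

```python
# Minimal sketch of a phrase-to-command dispatcher, as one might pair
# with a small-vocabulary recognizer such as CMU Sphinx.
# The "spoken phrase = shell command" file format is an assumption.

def load_commands(text):
    """Parse 'spoken phrase = shell command' lines into a dict."""
    table = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        phrase, _, command = line.partition("=")
        table[phrase.strip().lower()] = command.strip()
    return table

def dispatch(table, hypothesis):
    """Return the command for a recognized phrase, or None if unknown."""
    return table.get(hypothesis.strip().lower())

if __name__ == "__main__":
    cmds = "open browser = firefox\nlock screen = gnome-screensaver-command -l\n"
    table = load_commands(cmds)
    print(dispatch(table, "Open Browser"))  # firefox
```

Extending the vocabulary is then just a matter of adding lines to the command file, which is what makes the small-grammar approach practical despite its limits.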
Voice recognition on Ubuntu!
A small test showing the ability to use Google's voice recognition from the Ubuntu desktop. In theory this could be made into something like an Ubuntu desktop assistant. This is just a test; both accuracy and speed could be much better if it were written as a real app rather than a script.
http://www.youtube.com/watch?v=uM2Yb-PwP6o
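Scripts of this kind typically worked by recording a short clip, encoding it as FLAC, and POSTing it to the unofficial endpoint that Chrome itself used. The sketch below shows only the request construction; the endpoint and parameters reflect that unofficial v1 interface as commonly used at the time, not a supported API, and the actual network call is left as a comment.

```python
# Sketch of how desktop scripts talked to Google's (unofficial, unsupported)
# speech service: POST FLAC-encoded audio and parse the JSON reply.
# Endpoint and parameters are assumptions based on scripts of this era.
from urllib.parse import urlencode

def build_request(lang="en-US", rate=16000):
    """Return the (url, headers) such a script would use to POST FLAC audio."""
    base = "https://www.google.com/speech-api/v1/recognize"
    url = base + "?" + urlencode({"lang": lang, "client": "chromium"})
    headers = {"Content-Type": "audio/x-flac; rate=%d" % rate}
    return url, headers

url, headers = build_request()
# The actual POST needs recorded FLAC bytes and a network connection, e.g.:
# req = urllib.request.Request(url, data=flac_bytes, headers=headers)
# response = urllib.request.urlopen(req).read()
```

Because the endpoint was never a public contract, such scripts broke whenever Google changed it — one reason the later Web Speech API route through Chrome was more robust.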
Voice Recognition on Ubuntu, Part 2!
Earlier I showed the ability to do voice recognition with Google's servers without Chrome. Now I have built a working demo to show possible uses of an Ubuntu voice assistant.
http://www.youtube.com/watch?v=0kMWto5enlM
I have created a system that empowers Ubuntu Desktops with dictation from an Android app.
The site (with forum) is at http://ubuntuspeechinput.zymichost.com/
Selling well on Google Play, with lots of positive feedback, this is a simple solution for those who want to dictate to many Ubuntu programs without the hassle of configuring sound cards or spending hours training any software.
From this article: Ubuntu rips up drop-down menus
Ubuntu is set to replace the 30-year-old computer menu system with a "Head-Up Display" that allows users to simply type or speak menu commands. [...] Ubuntu plans to integrate voice recognition with HUD in future releases, allowing users to dictate commands to their PC.
HUD is described as follows:
Basically rather than navigating menus to find an application function, just tap ALT and type what you want the application to do.
Some fuzzy logic matches what you typed with the application menus, and the most relevant commands are displayed. To complete the action just press return, or select one of the alternative functions presented in the auto-complete.
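The matching step described above can be sketched in a few lines of Python. This is only an illustration of the idea — ranking an application's menu entries against whatever the user typed — using the standard library's `difflib` as a stand-in for whatever matcher HUD actually uses; the menu entries are made up.

```python
# Sketch of HUD-style fuzzy matching: rank menu entries against the
# user's typed query and return the most relevant ones, best first.
# difflib stands in for HUD's real matcher; the menu is illustrative.
import difflib

MENU = ["File > New Document", "File > Save As...", "Edit > Undo",
        "Edit > Preferences", "View > Full Screen", "Tools > Spelling"]

def hud_match(query, entries=MENU, limit=3):
    """Return up to `limit` entries most similar to the query, best first."""
    scored = [(difflib.SequenceMatcher(None, query.lower(), e.lower()).ratio(), e)
              for e in entries]
    scored.sort(reverse=True)
    return [entry for score, entry in scored[:limit]]

print(hud_match("full scren"))  # 'View > Full Screen' ranks first despite the typo
```

Pressing return would then invoke the top-ranked entry, with the runners-up shown as auto-complete alternatives.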
From Mark Shuttleworth's blog:
Voice is the natural next step
Searching is fast and familiar, especially once we integrate voice recognition, gesture and touch. We want to make it easy to talk to any application, and for any application to respond to your voice. The full integration of voice into applications will take some time. We can start by mapping voice onto the existing menu structures of your apps. And it will only get better from there.