Some related writings:
blog.esjworks.com/post/2011/03/1 ... he-box-III
blog.esjworks.com/post/2011/03/1 ... -the-box-V
From the Google talk document I gave earlier:
"""The transformation happens because there is a database with both spoken name and string name forms. When there is no match in the database, the user is then asked to fill in the match. Obviously, in the beginning, there’ll be a lot of data entry of symbol names for spoken names; but over time it should improve to the point where you hardly ever enter a symbol name.
"""This technique has one additional advantage. It gives the disabled developer the ability to work with a team’s coding standards and pre-existing symbols. Now disabled developers can integrate fully, both technically and socially, within a development team.
It's really quite simple. The major components are the code to identify a spoken name, look it up in a database, and return the matching symbol name. There are some minor complications with handling the initialization case, but again, read the document; it's all spelled out there. This is a very simple concept/process.
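A minimal sketch of that lookup-with-fallback loop, assuming an in-memory mapping and a callback that stands in for prompting the user (all names here are illustrative, not from the original document):

```python
# Sketch of the spoken-name -> symbol-name database described above.
# SymbolDatabase, lookup, and ask_user are hypothetical names.

class SymbolDatabase:
    """Maps spoken names to symbol names; asks the user on a miss."""

    def __init__(self):
        self._map = {}  # spoken name -> symbol name

    def lookup(self, spoken, ask_user):
        symbol = self._map.get(spoken)
        if symbol is None:
            # Unknown case: the user fills in the match, and we remember
            # it so future utterances resolve without prompting.
            symbol = ask_user(spoken)
            self._map[spoken] = symbol
        return symbol


prompts = []

def ask(spoken):
    # Stand-in for the real user interaction.
    prompts.append(spoken)
    return "totalCount"  # the team's pre-existing symbol name

db = SymbolDatabase()
first = db.lookup("total count", ask)   # miss: prompts the user once
second = db.lookup("total count", ask)  # hit: resolved from the database
```

Over time the database fills in, so the prompt fires less and less often, which is the "hardly ever enter a symbol name" behavior the quoted document describes.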
One of the reasons I rejected the automatic translator, at least at first, is that different languages and programming teams use different conversion algorithms depending on context. The simple translation gives the user complete control over the translation. In the future, we can look at semi-automatic translations, but the mechanism of interacting with the user will be basically the same as the unknown case, except that a predicted form is offered in place of an unknown.
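One way the "predicted form" could work, as a hedged sketch: instead of a blank prompt, generate candidates from the common naming conventions and let the user confirm or override. The function name and convention list are my assumptions, not the author's design:

```python
# Hypothetical candidate generator for the semi-automatic case: the
# conventions listed here are common ones, not a fixed spec.

def candidates(spoken):
    """Return plausible symbol names for a multi-word spoken name."""
    words = spoken.lower().split()
    return [
        "".join(words),                                         # runtogether
        "_".join(words),                                        # snake_case
        words[0] + "".join(w.capitalize() for w in words[1:]),  # camelCase
        "".join(w.capitalize() for w in words),                 # PascalCase
    ]
```

The unknown-case dialog would then present these as defaults; the user still has the final word, which preserves the "complete control" property of the simple translation.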
One of the problems with programming by speech has been overthinking the problem. What needs to happen is lots of trials and failures with simple stuff. I have no guarantees that this technique will work, but I really need to try it and live with it for a bit before I will know. Once that's done, combining it with the variable name creation also outlined in my blog would make an interesting follow-on experiment.
Yes, folks will have to listen to my experience for big chunks of the project, and it would help if people started using speech recognition as well, to understand the world I live in. I'm sorry, but you also need to live with my subjective evaluation of the technique once I've lived with it.
The problem with large numbers of functions, variables, etc. is something I'm quite aware of, and it is leading me towards the "disambiguation through reduction of scope" principle. One way to implement that is to look for the nearest feature that satisfies the query, not unlike a find operation. Another technique is to find everything that's visible on the screen (and maybe the entire file), assign each item a little pop-up number, and then the grammar would expect the user to say the number. The system would then perform the requested operation on the numbered item. But again, I want to try the simple stuff first.
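The numbered pop-up idea above can be sketched in a few lines. This is an illustration under my own assumptions (simple substring matching, 1-based numbers); the real system would match against whatever the editor reports as visible:

```python
# Sketch of disambiguation via numbered pop-ups: every visible symbol
# matching the query gets a number, and the grammar maps the spoken
# number back to the item. Function names are hypothetical.

def number_candidates(visible_symbols, query):
    """Assign 1-based numbers to visible symbols matching the query."""
    matches = [s for s in visible_symbols if query in s]
    return {i + 1: s for i, s in enumerate(matches)}

def pick(numbered, spoken_number):
    # The grammar expects the user to say the number; resolve it.
    return numbered[spoken_number]

visible = ["totalCount", "total_bytes", "retryCount", "total_count_max"]
menu = number_candidates(visible, "total")
choice = pick(menu, 2)  # user says "two"
```

Restricting the candidate set to what is on screen is what keeps the grammar small: the user only ever has to say a short number, no matter how large the project's symbol table is.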
Don't worry about that part. The grammar routines are something I have a lot of control over. What I don't have control over is how to get at the internal buffers of the editor to do "Select-and-Say" functionality.