@lucyli To read audio from the microphone and recognize it, just run this script: https://github.com/alphacep/vosk-api/blob/master/python/example/test_microphone.py
@jakob I find the test_microphone.py example a good place to start: sample rate and format are handled for me, so I don't need to worry about converting codecs or getting the timing wrong. The first time I talked into the microphone, words were recognized one by one with little latency, and the partial result sometimes changed afterwards to form a more likely English sentence.
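Roughly what that example does, as a minimal sketch — assuming vosk and sounddevice are installed and a model has been unpacked at `./model` (the path and the 16 kHz sample rate here are illustrative, not fixed by the API):

```python
# Sketch of the test_microphone.py flow: capture raw 16 kHz mono audio
# with sounddevice and stream it into a vosk recognizer.
import json
import queue

import sounddevice as sd
from vosk import KaldiRecognizer, Model

q = queue.Queue()

def callback(indata, frames, time, status):
    # Runs on the audio thread; just hand the raw bytes to the main loop.
    q.put(bytes(indata))

model = Model("model")               # path to an unpacked vosk model (assumed)
rec = KaldiRecognizer(model, 16000)  # recognizer at the capture sample rate

with sd.RawInputStream(samplerate=16000, blocksize=8000, dtype="int16",
                       channels=1, callback=callback):
    while True:
        data = q.get()
        if rec.AcceptWaveform(data):
            # A full utterance was finalized.
            print(json.loads(rec.Result())["text"])
        else:
            # Low-latency partial hypothesis; may be revised as more audio arrives.
            print(json.loads(rec.PartialResult())["partial"])
```

The partial results printed in the `else` branch are what you see changing word by word; the recognizer is free to rewrite them until `AcceptWaveform` finalizes the utterance.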
@jakob I was surprised to find that DeepSpeech doesn't seem to use a good language model, as it frequently produces obscure words and even strange combinations of letters. In the end I found vosk from alphacep. It's quite accurate even when used by a non-native speaker.