FunASR implements the training and inference code of the following papers:
- FunASR: A Fundamental End-to-End Speech Recognition Toolkit, INTERSPEECH 2023
- BAT: Boundary Aware Transducer for Memory-Efficient and Low-Latency ASR, INTERSPEECH 2023
- Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition, INTERSPEECH 2022
- E-Branchformer: Branchformer with Enhanced Merging for Speech Recognition, SLT 2022
- Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding, ICML 2022
- Universal ASR: Unifying Streaming and Non-Streaming ASR Using a Single Encoder-Decoder Model, arXiv preprint arXiv:2010.14099, 2020
- SAN-M: Memory Equipped Self-Attention for End-to-End Speech Recognition, INTERSPEECH 2020
- Streaming Chunk-Aware Multihead Attention for Online End-to-End Speech Recognition, INTERSPEECH 2020
- Conformer: Convolution-augmented Transformer for Speech Recognition, INTERSPEECH 2020
- Sequence-to-Sequence Learning with Transducers, NIPS 2016
- MFCCA: Multi-Frame Cross-Channel Attention for Multi-Speaker ASR in Multi-Party Meeting Scenario, ICASSP 2022
- CT-Transformer: Controllable Time-Delay Transformer for Real-Time Punctuation Prediction and Disfluency Detection, ICASSP 2020
- Attention Is All You Need, NIPS 2017