Background
Most of us did dictation exercises when we first started learning a language. For today's primary school students, a common piece of Chinese homework is the dictation of new words from the textbook, and many parents are familiar with the routine. Reading the words aloud is a simple task, but parents' time is precious, so pre-recorded dictation audio products have appeared on the market: narrators record the after-class dictation words from the Chinese textbook for parents to download. Such recordings are inflexible, though. If the teacher assigns a few extra words that are not in the textbook exercises, the recordings no longer meet the needs of parents and children. This article describes an automatic reading app built with the general text recognition (OCR) and text-to-speech (TTS) capabilities of HUAWEI ML Kit: simply photograph the words or text to be dictated, and the app reads the text in the photo aloud, with adjustable timbre and pitch.
Preparations Before Development
Open the project-level build.gradle file in Android Studio.
Configure the Maven repository address of the HMS SDK under allprojects > repositories.
<code class="java">allprojects { repositories { google() jcenter() maven {url 'http://developer.huawei.com/repo/'} } }</code>
Configure the Maven repository address of the HMS SDK under buildscript > repositories.
<code class="java">buildscript { repositories { google() jcenter() maven {url 'http://developer.huawei.com/repo/'} } }</code>
Configure the AGC plugin under buildscript > dependencies.
<code class="java">dependencies { classpath 'com.huawei.agconnect:agcp:1.2.1.301' }</code>
Add build dependencies.
Open the app-level build.gradle file.
Integrate the SDKs.
<code class="java">dependencies{ implementation 'com.huawei.hms:ml-computer-voice-tts:1.0.4.300' implementation 'com.huawei.hms:ml-computer-vision-ocr:1.0.4.300' implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:1.0.4.300' }</code>
Apply the AGC plugin by adding the following line at the top of the file.
<code class="java">apply plugin: 'com.huawei.agconnect'</code>
Specify permissions and features by declaring them in AndroidManifest.xml.
<code class="java"><uses-permission android:name="android.permission.CAMERA" /> <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" /> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> <uses-feature android:name="android.hardware.camera" /> <uses-feature android:name="android.hardware.camera.autofocus" /></code>
Key Code Steps for Reading Homework Aloud
The app has two main functions: recognizing the homework text and reading it aloud. OCR and TTS together implement the reading: take a photo of the homework, then tap Play to have the recognized text read aloud.
- Apply for runtime permissions
<code class="java">private static final int PERMISSION_REQUESTS = 1; @Override public void onCreate(Bundle savedInstanceState) { // Checking camera permission if (!allPermissionsGranted()) { getRuntimePermissions(); } }</code>
- Start the reading screen
<code class="java">public void takePhoto(View view) { Intent intent = new Intent(MainActivity.this, ReadPhotoActivity.class); startActivity(intent); }</code>
- In the onCreate() method, call createLocalTextAnalyzer() to create an on-device text analyzer
<code class="java">private void createLocalTextAnalyzer() { MLLocalTextSetting setting = new MLLocalTextSetting.Factory() .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE) .setLanguage("zh") .create(); this.textAnalyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting); }</code>
- In the onCreate() method, call createTtsEngine() to create the text-to-speech engine, build a TTS callback to handle the synthesis results, and pass the callback to the newly created engine
<code class="java">private void createTtsEngine() { MLTtsConfig mlConfigs = new MLTtsConfig() .setLanguage(MLTtsConstants.TTS_ZH_HANS) .setPerson(MLTtsConstants.TTS_SPEAKER_FEMALE_ZH) .setSpeed(0.2f) .setVolume(1.0f); this.mlTtsEngine = new MLTtsEngine(mlConfigs); MLTtsCallback callback = new MLTtsCallback() { @Override public void onError(String taskId, MLTtsError err) { } @Override public void onWarn(String taskId, MLTtsWarn warn) { } @Override public void onRangeStart(String taskId, int start, int end) { } @Override public void onEvent(String taskId, int eventName, Bundle bundle) { if (eventName == MLTtsConstants.EVENT_PLAY_STOP) { if (!bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED)) { Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_finish, Toast.LENGTH_SHORT).show(); } } } }; mlTtsEngine.setTtsCallback(callback); }</code>
- Set up the buttons for loading a photo, taking a photo, and reading aloud
<code class="java">this.relativeLayoutLoadPhoto.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ReadPhotoActivity.this.selectLocalImage(ReadPhotoActivity.this.REQUEST_CHOOSE_ORIGINPIC); } }); this.relativeLayoutTakePhoto.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ReadPhotoActivity.this.takePhoto(ReadPhotoActivity.this.REQUEST_TAKE_PHOTO); } });</code>
- In the callbacks for taking and loading a photo, start text recognition with startTextAnalyzer() (a sketch of these callbacks follows the snippet below)
<code class="java">private void startTextAnalyzer() { if (this.isChosen(this.originBitmap)) { MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create(); Task<MLText> task = this.textAnalyzer.asyncAnalyseFrame(mlFrame); task.addOnSuccessListener(new OnSuccessListener<MLText>() { @Override public void onSuccess(MLText mlText) { // Transacting logic for segment success. if (mlText != null) { ReadPhotoActivity.this.remoteDetectSuccess(mlText); } else { ReadPhotoActivity.this.displayFailure(); } } }).addOnFailureListener(new OnFailureListener() { @Override public void onFailure(Exception e) { // Transacting logic for segment failure. ReadPhotoActivity.this.displayFailure(); return; } }); } else { Toast.makeText(this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show(); return; } }</code>
- Once recognition succeeds, tap the play button to start reading
<code class="java">this.relativeLayoutRead.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { if (ReadPhotoActivity.this.sourceText == null) { Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show(); } else { ReadPhotoActivity.this.mlTtsEngine.speak(sourceText, MLTtsEngine.QUEUE_APPEND); Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_start, Toast.LENGTH_SHORT).show(); } } });</code>