I'm using Google's Text-to-Speech API on the backend and sending the audio to the frontend as an ArrayBuffer. It then gets converted to an object URL that is played with audio.play(). This works in Chrome on mobile, Windows, and macOS, but not in Safari.
I've seen a few similar threads and tried a few of the answers, with no luck.
I've also tried creating the audioPlayer once, when the component is created, and only changing its src in playVoice.
playVoice is called from a button's onClick handler.
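Roughly, that reuse-one-player attempt looks like the sketch below. It uses a minimal AudioStub in place of the browser's Audio element, and a hypothetical fetchSpeechUrl helper standing in for the getSpeech + convertAudio pair shown later, so the control flow can be seen outside a browser:

```javascript
// Minimal stand-in for the DOM Audio element, so the pattern runs
// outside a browser; in the real component this is `new Audio()`.
class AudioStub {
  constructor() {
    this.src = "";
    this.played = false;
  }
  play() {
    this.played = true; // the real Audio.play() returns a Promise
  }
}

// Created once when the component mounts, then reused on every click.
const audioPlayer = new AudioStub();

// Hypothetical stand-in for getSpeech + convertAudio: resolves to an
// object URL for the synthesized audio.
const fetchSpeechUrl = async (text) => `blob:fake-url-for-${text}`;

// Click handler: only the src changes on each play.
const playVoice = async (text) => {
  audioPlayer.src = await fetchSpeechUrl(text);
  audioPlayer.play();
};
```

The point of the pattern is that the element itself is created outside the async callback; only the src assignment and play() happen after the request resolves.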
The frontend functions look like:
const playVoice = (text: string) => {
  getSpeech(text, sourceLanguage, "NEUTRAL").then((res) => {
    const audioPlayer = new Audio();
    audioPlayer.pause();
    audioPlayer.currentTime = 0;
    audioPlayer.src = convertAudio([res.data]);
    audioPlayer.play();
  });
};
with getSpeech being an axios GET request:
export const getSpeech = async (
  text: string,
  languageCode: string,
  voice: VoiceTypes
) => {
  return await axios({
    method: "GET",
    url: "/api/speech/",
    responseType: "blob",
    params: {
      text,
      languageCode,
      voice,
    },
  });
};
and convertAudio looks like:
export const convertAudio = (buffer: ArrayBuffer[]): string => {
  return URL.createObjectURL(new Blob(buffer));
};
My backend looks something like:
const textToSpeech = require("@google-cloud/text-to-speech");
const asyncHandler = require("express-async-handler");
const stream = require("stream");

const client = new textToSpeech.TextToSpeechClient(process.env.SERVICE_ACCOUNT);

const getVoice = asyncHandler(async (req, res) => {
  const { text, languageCode, voice } = req.query;
  const request = {
    input: { text },
    voice: { languageCode, ssmlGender: voice },
    audioConfig: { audioEncoding: "MP3" },
  };
  res.set({
    "Content-Type": "audio/mpeg",
    "Transfer-Encoding": "chunked",
  });
  const [response] = await client.synthesizeSpeech(request);
  const bufferStream = new stream.PassThrough();
  bufferStream.end(Buffer.from(response.audioContent));
  bufferStream.pipe(res);
});
From: Audio not playing in Safari for macOS/iOS