Is it possible to have a backend with its own audio?

Let’s say I have a backend extension that can’t use playbin and needs a custom GStreamer pipeline. In my particular case, I want to treat a Bluetooth A2DP source like a radio stream, but it could be anything (maybe you want to make a plugin that adds a reverb effect or uses audiokaraoke). Yes, some things like this are probably better done by configuring the output, but that only works if what you want to do is downstream of playbin, and if you always want it active.
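For concreteness, here’s roughly the kind of pipeline I have in mind — just a sketch, and the element names (avdtpsrc, sbcparse, sbcdec) are placeholders for whatever Bluetooth source elements are actually available on the system:

```python
# Hypothetical gst-launch-style description for treating a Bluetooth
# A2DP source like a radio stream. The element names are placeholders;
# the real pipeline depends on which GStreamer plugins are installed.
A2DP_PIPELINE = (
    "avdtpsrc "             # placeholder A2DP source element
    "! sbcparse ! sbcdec "  # decode the SBC audio stream
    "! audioconvert "
    "! audioresample"
)

def describe(pipeline: str) -> list[str]:
    """Split a gst-launch-style description into its element stages."""
    return [stage.strip() for stage in pipeline.split("!")]
```

The point is only that this is a source-side pipeline, so it can’t be expressed as an output/sink configuration downstream of playbin.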

I’m attempting to implement this by subclassing mopidy.audio.Audio and overriding Audio._setup_playbin(), Audio._setup_audio_sink(), and Audio.set_uri(). The backend constructor is always passed an actor proxy for the audio subsystem, but mine ignores that proxy and creates its own Audio instance. It then sets up its own PlaybackProvider and passes it a proxy to the new audio instance.
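Roughly what I mean, as a self-contained sketch — the classes below are minimal stand-ins for the real mopidy.audio.Audio, mopidy.backend.PlaybackProvider, and pykka actor proxies, and names like MyAudio and MyBackend are mine; only the override points and the ownership wiring are the point:

```python
class Audio:  # stand-in for mopidy.audio.Audio
    def __init__(self):
        self._setup_playbin()
        self._setup_audio_sink()

    def _setup_playbin(self):
        self.playbin = "playbin"  # real code builds a GStreamer playbin

    def _setup_audio_sink(self):
        self.sink = "autoaudiosink"

    def set_uri(self, uri):
        self.uri = uri


class MyAudio(Audio):  # overrides the three methods mentioned above
    def _setup_playbin(self):
        # Build a custom pipeline instead of playbin.
        self.playbin = "custom-a2dp-pipeline"

    def _setup_audio_sink(self):
        self.sink = "custom-sink"

    def set_uri(self, uri):
        # Map the URI onto the custom pipeline rather than playbin.
        self.uri = uri


class PlaybackProvider:  # stand-in for mopidy.backend.PlaybackProvider
    def __init__(self, audio, backend):
        self.audio = audio
        self.backend = backend


class MyBackend:  # stand-in for a mopidy.backend.Backend subclass
    def __init__(self, config, audio):
        # Keep (but ignore) the audio proxy that core passes in...
        self._core_audio = audio
        # ...and create a private Audio instance instead. In the real
        # code this would be a pykka actor, e.g. MyAudio.start(...).proxy().
        self._own_audio = MyAudio()
        self.playback = PlaybackProvider(audio=self._own_audio, backend=self)
```

The worry in the rest of this post is exactly that `_core_audio` and `_own_audio` now both exist, and core only knows about the former.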

I have a feeling this will cause some real issues, mainly because core and other components hold their own references to an Audio object that might not be the currently active one. Is there enough additional utility in moving away from a singleton Audio object to make it worth steering the project in that direction?