Audio buffer

Any framework/solution to read the audio buffer?

AVAudioRecorder does not allow handling the audio buffer… How about the AudioUnits framework?

Hi jean-paul! Thank you for the suggestion … I’ll check it out!

I’ve checked some code built around AVFoundation that taps the microphone using AVAudioPCMBuffer (a subclass of AVAudioBuffer)… The goal is to read microphone data without saving any audio file. How do I “translate it” to Xojo code?

In Swift, the code is something like the following:


import AVFoundation

let audioEngine = AVAudioEngine()
let inputNode = audioEngine.inputNode
let sampleCount = 4096
let bus = 0
var samplesAsDoubles = [Double](repeating: 0, count: sampleCount)
let frameLength = UInt32(sampleCount)

inputNode.installTap(onBus: 0, bufferSize: frameLength, format: inputNode.inputFormat(forBus: bus)) {
    (buffer: AVAudioPCMBuffer, audioTime: AVAudioTime) in

    // Change the incoming buffer size
    buffer.frameLength = UInt32(sampleCount)

    // Populate the array with the incoming audio samples (first channel)
    if let channelData = buffer.floatChannelData {
        for i in 0..<Int(buffer.frameLength) {
            samplesAsDoubles[i] = Double(channelData[0][i])
        }
    }
    // Here I should call a function to process/display the info on the buffer samples
}
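Note the snippet above only installs the tap; the block fires only once the engine is running, so the setup still needs something like this (not shown in the original snippet, using the audioEngine declared above):

audioEngine.prepare()
do {
    try audioEngine.start()
} catch {
    print("Could not start the engine: \(error)")
}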

As far as I know:

  1. AVAudioRecorder has metering functions for the input signal (such as peakPowerForChannel and averagePowerForChannel),
     but AVAudioRecorder is the “high level” way to do this! (See the sketch after this list.)

  2. If you want the “low level” way, then you need a tap on the engine’s inputNode.
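As a minimal sketch of the “high level” metering route in Swift — the file URL and recorder settings here are illustrative assumptions, not from the original posts:

import AVFoundation

// Minimal metering sketch; URL and settings are placeholders.
let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("meter.caf")
let settings: [String: Any] = [AVFormatIDKey: kAudioFormatAppleIMA4,
                               AVSampleRateKey: 44100.0,
                               AVNumberOfChannelsKey: 1]
do {
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.isMeteringEnabled = true
    _ = recorder.record()

    // Poll the meters later, e.g. from a timer:
    recorder.updateMeters()
    let peak = recorder.peakPower(forChannel: 0)       // dBFS, 0 = full scale
    let average = recorder.averagePower(forChannel: 0) // dBFS
    print("peak \(peak) dB, average \(average) dB")
} catch {
    print("Recorder setup failed: \(error)")
}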

Some interesting Xcode code (how do I “translate it” to Xojo?):

AVAudioEngine* sEngine = NULL;

- (void)applicationDidBecomeActive:(UIApplication *)application
{
    /*
     Restart any tasks that were paused (or not yet started) while the application was inactive.
     If the application was previously in the background, optionally refresh the user interface.
     */

    [glView startAnimation];

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];

    NSError* error = nil;
    if (audioSession.isInputAvailable)
        [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    if (error) {
        return;
    }

    [audioSession setActive:YES error:&error];
    if (error) {
        return;
    }

    sEngine = [[AVAudioEngine alloc] init];

    AVAudioMixerNode* mixer = [sEngine mainMixerNode];
    AVAudioInputNode* input = [sEngine inputNode];
    [sEngine connect:input to:mixer format:[input inputFormatForBus:0]];

    __block NSTimeInterval start = 0.0;

    // tap
    [input installTapOnBus:0 bufferSize:4096 format:[input inputFormatForBus:0] block:^(AVAudioPCMBuffer* buffer, AVAudioTime* when) {

        if (start == 0.0)
            start = [AVAudioTime secondsForHostTime:[when hostTime]];

        // Change the incoming buffer size
        NSLog(@"buffer frame length %d", (int)buffer.frameLength);
        buffer.frameLength = 4096;
        UInt32 frames = 0;

        // Populate the array with the incoming audio samples
        for (UInt32 i = 0; i < buffer.audioBufferList->mNumberBuffers; i++) {
            Float32 *data = (Float32 *)buffer.audioBufferList->mBuffers[i].mData;
            frames = buffer.audioBufferList->mBuffers[i].mDataByteSize / sizeof(Float32);

            // Here I should call a function to process/display the samples in data
        }
        NSLog(@"%d frames are sent at %lf", (int)frames, [AVAudioTime secondsForHostTime:[when hostTime]] - start);
    }];

    [sEngine startAndReturnError:&error];
    if (error) {
        return;
    }
}

The second one, translated into Xojo/iOSLib, would be:

[code]dim session as new AppleAVAudioSession
dim error as new AppleError
dim success as Boolean

if session.InputAvailable then
  success = session.SetCategory(AppleAVAudioSession.kAVAudioSessionCategoryPlayAndRecord, error)
  if not success then
    break
  end if

  success = session.SetActive(true, error)
  if not success then
    break
  end if

  dim engine as new AppleAVAudioEngine
  dim mixer as AppleAVAudioMixerNode = engine.MainMixerNode
  dim input as AppleAVAudioInputNode = engine.InputNode
  dim format as AppleAVAudioFormat = input.InputFormat(0)
  engine.ConnectNodes(input, mixer, format)

  dim block as new AppleBlock(AddressOf CallBackBlock)
  input.InstallTap(0, 4096, format, block)

  success = engine.Start(error)
  if not success then
    break
  end if
end if[/code]

CallBackBlock is a method that takes the bufferPtr (a Ptr to an AppleAVAudioPCMBuffer) and the timePtr (for an AppleAVAudioTime).
But analyzing the result is not easy. You cannot do too much in a Xojo block that’s executed on a random thread, not even create the objects from their pointers (at least I found no way without running into Stack Overflows). I put a few proposals in iOSLib. Switch the start view to AVAudioView (it’s currently in the main project).

@Daniel: I think I found a solution.
Creating an object doesn’t work, but you can handle the declares. I tried the following in CallBackBlock:

[code]declare Function floatChannelData lib AVFoundationLibName selector "floatChannelData" (id as ptr) as ptr
declare Function frameLength lib AVFoundationLibName selector "frameLength" (id as ptr) as UInt32

dim cdata as ptr = floatChannelData(bufferptr)
dim frames as UInt32 = frameLength(bufferptr)

System.DebugLog Integer(frames).ToText + ": " + Integer(cdata).ToText
dim mblock as new MutableMemoryBlock(cdata, frames)[/code]

And no crash anymore!
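One subtlety worth noting here (my reading of the AVAudioPCMBuffer headers, not something from the post above): floatChannelData returns a pointer to an array of per-channel sample pointers, and frameLength is a sample count, not a byte count. In Swift the double indirection looks like this:

import AVFoundation

// Sketch: floatChannelData is effectively float**, so dereference once
// per channel before reading samples.
func copySamples(from buffer: AVAudioPCMBuffer) -> [Float] {
    guard let channelData = buffer.floatChannelData else { return [] }
    let channel0 = channelData[0]           // sample pointer for channel 0
    let count = Int(buffer.frameLength)     // samples, not bytes
    // The byte size of one channel is count * MemoryLayout<Float>.size.
    return Array(UnsafeBufferPointer(start: channel0, count: count))
}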

Hi Ulrich! I’ve been busy on another project for some time… I never tested your last suggestion… Any news about the audio buffer? You can see the app (written with Xojo) that I should improve with this function at the following link:

https://itunes.apple.com/us/app/zephyrpro-wind-meter/id1049770301?mt=8

Actually it runs fine, but it has to save the mic data to a file and then read it on the fly (and of course that introduces some latency).

I’m sorry for the long delay, Daniel! I’ve been in Trier at the PiAndMore and had a lot of things to catch up with after returning Monday night.
No, I haven’t found time to work on this stuff yet. You might have seen I published the first bit of OSXLib, and while doing so I suddenly realized how I can separate the frameworks so you don’t have to install the whole library. I want to bring that feature into iOSLib too but still have to do a few things first. What I can do is try to port AVFoundation to OSXLib soon. I found it much easier to try things out on OS X (or macOS), where you don’t have to wait for the Simulator to start each time. As the frameworks are (almost or fully? I didn’t check yet) identical on both Apple systems, this could then be ported back to iOSLib.

Anyway: what I wrote last is valid. You can safely handle datatypes on a background thread. That’s where the external declares come in really handy, because you can address them from there too. Although Norman will tell you thy end is nigh if you try. :wink: The officially supported solution in such cases would be to design a plug-in in another language ;(
So use it with caution. It might stop working one day, but for the present it works.

Björn told me you can even handle declared objects as long as you put all their code inside #Pragma BreakOnExceptions false and #Pragma BackgroundTasks false. I haven’t tried yet.

Hi Ulrich! Any news about the audio buffer? I’ve made no progress… I’m running out of time, and I should write a new sound app… I need a working solution in order to handle the audio buffer…

Hi Daniel,
what did you try so far? I just managed to set up an AVAudioRecorder in OS X and still have to translate some other classes (and some things are a bit different, like no AVAudioSession in macOS), but basically it should work like I’ve written: you attach a tap to a node, and then it depends on what you’re intending to do. You cannot do too much in the callback block because it will be executed on a background thread, so I think it would be wise to do the processing outside of it. You can forward the recorded bytes to a shared property of your window or whatever class suits best, and then either employ a timer that does the processing once data has arrived or invoke a processing method on the main thread from inside the callback block.
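For illustration, here is the same pattern in Swift; processSamples is a hypothetical handler, not part of iOSLib:

import AVFoundation

func processSamples(_ samples: [Float]) {
    // Hypothetical main-thread handler: analyze or display the samples.
}

let engine = AVAudioEngine()
let input = engine.inputNode

input.installTap(onBus: 0, bufferSize: 4096, format: input.inputFormat(forBus: 0)) { buffer, when in
    // Keep the tap block minimal: copy the samples out, then leave.
    guard let channelData = buffer.floatChannelData else { return }
    let samples = Array(UnsafeBufferPointer(start: channelData[0], count: Int(buffer.frameLength)))
    // Hand the copied data to the main thread for processing.
    DispatchQueue.main.async {
        processSamples(samples)
    }
}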

Currently, iOSLib is a bit too messy for me to work with until I’ve cleaned it up, but I am sure we can create a working solution. If you like, feel free to contact me privately with more information about what you want to do with the data.

And sorry for the delay: I just got back from a vacation earlier this week and had to catch up with a few things first.

Yes, it works, Daniel. I pushed the classes to OSXLib and found a bug:

AppleAVAudioEngine’s Start method must be

Public Function Start(byref anError as AppleError) as Boolean
  dim p as ptr = anError.Id
  dim result as boolean = startAndReturnError(id, p)
  anError = AppleError.MakefromPtr(p)
  return result
End Function

And the anError parameter of the startAndReturnError external method of this class must be declared byref too.

I then installed a callbackblock method on the window:

Public Sub CallBackBlock(bufferPtr as Ptr, timePtr as Ptr)
  #Pragma StackOverflowChecking false
  #Pragma BackgroundTasks false
  System.DebugLog Integer(timePtr).ToText
  dim buf as new xojo.Core.MemoryBlock(bufferPtr, 4096)
  testwindow.blocks.Append buf
End Sub

and a shared property Blocks() As Xojo.Core.MemoryBlock.

After recording a few seconds with engine.Start, I stopped the project and had 10 memoryblocks in this property and the same number of timePtr values in my console log.

Please tell me if it doesn’t work for you after these modifications.

EDIT: You could simply forward the bufferPtr and timePtr to shared properties and have a thread or timer look for new entries, where you could create the AudioBuffers from the ptr values and process them. That should be faster than wrapping them in a memoryblock where their data is not yet decoded.
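A sketch of that polling pattern in Swift; the shared array, the lock, and the interval are placeholders of my choosing:

import AVFoundation

// Shared state: the tap block appends, the timer drains.
var pendingBuffers: [AVAudioPCMBuffer] = []
let pendingLock = NSLock()

let pollTimer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
    // Take the pending buffers off the shared array under the lock...
    pendingLock.lock()
    let buffers = pendingBuffers
    pendingBuffers.removeAll()
    pendingLock.unlock()
    // ...then decode/process them on the timer's thread.
    for buffer in buffers {
        _ = buffer.frameLength // process/display the samples here
    }
}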

Hi Ulrich! Within the next week I’ll try your new suggestion… thanks!

Great! Let me know if something doesn’t work.
I know you are looking for an iOS solution, but anyway: I added an AVAudio demo window to OSXLib that shows how to record with an AVAudioRecorder and how to forward the sample results from an Audio Engine. With the modification to the iOSLib code I mentioned above, you can use them as a template for iOS.

What I didn’t mention above: it is important to retain the ptrs that are forwarded to the callbackblock. Therefore it looks like this now:

#Pragma StackOverflowChecking false
#Pragma BackgroundTasks false
AVAudioWindow.BufferPtrs.Append FoundationFrameWork.retain(bufferPtr)
AVAudioWindow.TimerPtrs.Append FoundationFrameWork.retain(timePtr)

BufferPtrs() and TimerPtrs() are shared arrays of Ptr.

Then you can build their objects from them. I extended NSObject’s Ptr constructor to take ownership in those cases. You should do so too because otherwise the thing will start to leak. That’s how the EngineUpdate timer method creates them now:

if BufferPtrs.Ubound > -1 and TimerPtrs.Ubound > -1 then
  dim buf as ptr = BufferPtrs(0)
  dim time as ptr = TimerPtrs(0)
  dim buffer as new AppleAVAudioPCMBuffer(buf, true, false)
  dim buffertime as new AppleAVAudioTime(time, true, false)
  BufferPtrs.Remove(0)
  TimerPtrs.Remove(0)
  TextArea1.AppendText buffer.AudioBufferList.Size.ToText + " Samples received at " + buffertime.HostTime.ToText + EndOfLine
end if
#Pragma unused t

with the new optional AppleObject constructor:

Public Sub Constructor(aPtr as Ptr, takeOwnership as Boolean, own as Boolean)
  mid = if(own, retain(aPtr), aPtr)
  MHasOwnership = takeOwnership
End Sub
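For comparison, the same retain-then-take-ownership handshake expressed in Swift (a sketch, not iOSLib code):

import AVFoundation

// The producer side (the tap block) retains the object before passing
// its raw pointer across the thread boundary...
func stash(_ buffer: AVAudioPCMBuffer) -> UnsafeMutableRawPointer {
    return Unmanaged.passRetained(buffer).toOpaque() // +1 retain
}

// ...and the consumer balances that retain when it rebuilds the object,
// so nothing leaks and nothing is over-released.
func take(_ ptr: UnsafeMutableRawPointer) -> AVAudioPCMBuffer {
    return Unmanaged<AVAudioPCMBuffer>.fromOpaque(ptr).takeRetainedValue()
}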

Hi Daniel,
there were some bugs in the AVAudioSession in iOSLib which are now fixed in the new version at https://github.com/UBogun/Xojo-AppleLib.

You can now use the AppleAVAudioSession class by dragging it onto your layout directly. I put an example into the splash screen that requests a RecordingPermission, but there’s no recording functionality yet. You should be able to use the code from OSXLib for that.
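The underlying Cocoa call wrapped there is AVAudioSession’s record-permission request; in Swift it looks like this:

import AVFoundation

// Sketch of the permission request the splash screen example wraps.
AVAudioSession.sharedInstance().requestRecordPermission { granted in
    // Called asynchronously; only start tapping the mic once granted.
    print(granted ? "Microphone access granted" : "Microphone access denied")
}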

The properties, where available, no longer have a separate Set… method (or rather, it is wrapped into a setter method). You can try to write them too, but you should use a try/catch clause because they raise ErrorExceptions in case they fail.
Don’t set the properties in the Open event; wait for the request handler to fire, or write them with a delay. I had to disable the setters during initialization because the Inspector properties, though not enabled, would otherwise try to set everything to 0 values.

Addition: I added a view that does basically the same as the Audio Engine part of the OSXLib demo. You should be able to extend the timer method, which currently only shows that new samples have been received.