Drawing waveform with AVAssetReader

Tags: iphone, objective-c, core-audio, ipod, waveform

iPhone Problem Overview


I am reading a song from the iPod library using an asset URL (named audioUrl in the code below). I can play it in many ways, I can cut it, I can do some processing with it, but... I really don't understand what I am supposed to do with this CMSampleBufferRef to get data for drawing a waveform! I need info about the peak values. How can I get it this (or maybe another) way?

	AVAssetTrack * songTrack = [audioUrl.tracks objectAtIndex:0];
	AVAssetReaderTrackOutput * output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:nil];
	[reader addOutput:output];
	[output release];
	
	NSMutableData * fullSongData = [[NSMutableData alloc] init];
	[reader startReading];
	
	while (reader.status == AVAssetReaderStatusReading){
		
		AVAssetReaderTrackOutput * trackOutput = 
		(AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
		
		CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];
		
		if (sampleBufferRef){ /* what do I do with this? */ }

Please help me!

iPhone Solutions


Solution 1 - iPhone

I was searching for a similar thing and decided to "roll my own." I realize this is an old post, but in case anyone else is searching for this, here is my solution. It is relatively quick and dirty and normalizes the image to "full scale". The images it creates are "wide", i.e. you need to put them in a UIScrollView or otherwise manage the display (one way to do this is sketched after the invocation example below).

This is based on some answers given to this question: https://stackoverflow.com/q/4796643/830899

Sample Output

[sample waveform image]

EDIT: I have added a logarithmic version of the averaging and render methods; see the end of this answer for the alternate version and comparison outputs. I personally prefer the original linear version, but decided to post the logarithmic one in case someone can improve on the algorithm used.

You'll need these imports:

#import <MediaPlayer/MediaPlayer.h>
#import <AVFoundation/AVFoundation.h>

First, a generic rendering method that takes a pointer to averaged sample data and returns a UIImage. Note that these samples are not playable audio samples.

-(UIImage *) audioImageGraph:(SInt16 *) samples
                normalizeMax:(SInt16) normalizeMax
                 sampleCount:(NSInteger) sampleCount 
                channelCount:(NSInteger) channelCount
                 imageHeight:(float) imageHeight {
    
    CGSize imageSize = CGSizeMake(sampleCount, imageHeight);
    UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    
    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextSetAlpha(context,1.0);
    CGRect rect;
    rect.size = imageSize;
    rect.origin.x = 0;
    rect.origin.y = 0;
    
    CGColorRef leftcolor = [[UIColor whiteColor] CGColor];
    CGColorRef rightcolor = [[UIColor redColor] CGColor];
    
    CGContextFillRect(context, rect);
    
    CGContextSetLineWidth(context, 1.0);
    
    float halfGraphHeight = (imageHeight / 2) / (float) channelCount ;
    float centerLeft = halfGraphHeight;
    float centerRight = (halfGraphHeight*3) ; 
    float sampleAdjustmentFactor = (imageHeight/ (float) channelCount) / (float) normalizeMax;
    
    for (NSInteger intSample = 0 ; intSample < sampleCount ; intSample ++ ) {
        SInt16 left = *samples++;
        float pixels = (float) left;
        pixels *= sampleAdjustmentFactor;
        CGContextMoveToPoint(context, intSample, centerLeft-pixels);
        CGContextAddLineToPoint(context, intSample, centerLeft+pixels);
        CGContextSetStrokeColorWithColor(context, leftcolor);
        CGContextStrokePath(context);
        
        if (channelCount==2) {
            SInt16 right = *samples++;
            float pixels = (float) right;
            pixels *= sampleAdjustmentFactor;
            CGContextMoveToPoint(context, intSample, centerRight - pixels);
            CGContextAddLineToPoint(context, intSample, centerRight + pixels);
            CGContextSetStrokeColorWithColor(context, rightcolor);
            CGContextStrokePath(context); 
        }
    }
    
    // Create new image
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    
    // Tidy up
    UIGraphicsEndImageContext();   
    
    return newImage;
}

Next, a method that takes an AVURLAsset and returns PNG image data:

- (NSData *) renderPNGAudioPictogramForAsset:(AVURLAsset *)songAsset {
    
    NSError * error = nil;
    AVAssetReader * reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
    AVAssetTrack * songTrack = [songAsset.tracks objectAtIndex:0];
    
    NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
                                        //     [NSNumber numberWithInt:44100.0],AVSampleRateKey, /*Not Supported*/
                                        //     [NSNumber numberWithInt: 2],AVNumberOfChannelsKey,    /*Not Supported*/
                                        [NSNumber numberWithInt:16],AVLinearPCMBitDepthKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved,
                                        nil];
    
    AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict];
    
    [reader addOutput:output];
    [output release];
    
    UInt32 sampleRate,channelCount;
    
    NSArray* formatDesc = songTrack.formatDescriptions;
    for(unsigned int i = 0; i < [formatDesc count]; ++i) {
        CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i];
        const AudioStreamBasicDescription* fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription (item);
        if(fmtDesc ) {
            
            sampleRate = fmtDesc->mSampleRate;
            channelCount = fmtDesc->mChannelsPerFrame;
            
            //    NSLog(@"channels:%u, bytes/packet: %u, sampleRate %f",fmtDesc->mChannelsPerFrame, fmtDesc->mBytesPerPacket,fmtDesc->mSampleRate);
        }
    }
    
    UInt32 bytesPerSample = 2 * channelCount;
    SInt16 normalizeMax = 0;
    
    NSMutableData * fullSongData = [[NSMutableData alloc] init];
    [reader startReading];
    
    UInt64 totalBytes = 0;         
    SInt64 totalLeft = 0;
    SInt64 totalRight = 0;
    NSInteger sampleTally = 0;
    
    NSInteger samplesPerPixel = sampleRate / 50;
    
    while (reader.status == AVAssetReaderStatusReading){
        
        AVAssetReaderTrackOutput * trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];
        
        if (sampleBufferRef){
            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef);
            
            size_t length = CMBlockBufferGetDataLength(blockBufferRef);
            totalBytes += length;
         
            NSAutoreleasePool *wader = [[NSAutoreleasePool alloc] init];
            
            NSMutableData * data = [NSMutableData dataWithLength:length];
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, data.mutableBytes);
         
            SInt16 * samples = (SInt16 *) data.mutableBytes;
            int sampleCount = length / bytesPerSample;
            for (int i = 0; i < sampleCount ; i ++) {
                
                SInt16 left = *samples++;
                totalLeft  += left;
                
                SInt16 right;
                if (channelCount==2) {
                    right = *samples++;
                    totalRight += right;
                }
                
                sampleTally++;
                
                if (sampleTally > samplesPerPixel) {
                    
                    left  = totalLeft / sampleTally; 
                    
                    SInt16 fix = abs(left);
                    if (fix > normalizeMax) {
                        normalizeMax = fix;
                    }
                    
                    [fullSongData appendBytes:&left length:sizeof(left)];
                    
                    if (channelCount==2) {
                        right = totalRight / sampleTally; 
                        
                        SInt16 fix = abs(right);
                        if (fix > normalizeMax) {
                            normalizeMax = fix;
                        }
                        
                        [fullSongData appendBytes:&right length:sizeof(right)];
                    }
                    
                    totalLeft   = 0;
                    totalRight  = 0;
                    sampleTally = 0;
                }
            }
            
           [wader drain];
            
            CMSampleBufferInvalidate(sampleBufferRef);
            CFRelease(sampleBufferRef);
        }
    }
   
    NSData * finalData = nil;
    
    if (reader.status == AVAssetReaderStatusFailed || reader.status == AVAssetReaderStatusUnknown){
        // Something went wrong. return nil

        return nil;
    }
    
    if (reader.status == AVAssetReaderStatusCompleted){
        
        NSLog(@"rendering output graphics using normalizeMax %d",normalizeMax);
        
        UIImage *test = [self audioImageGraph:(SInt16 *) 
                         fullSongData.bytes 
                                 normalizeMax:normalizeMax 
                                  sampleCount:fullSongData.length / 4 
                                 channelCount:2
                                  imageHeight:100];
        
        finalData = imageToData(test);
    }        
    
    [fullSongData release];
    [reader release];
    
    return finalData;
}
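
If you don't need the caching wrapper described below, calling the method above directly is straightforward. A minimal sketch (not part of the original answer; the helper method and destination path are my own, and it assumes renderPNGAudioPictogramForAsset: lives in the same class as the caller):

// Hypothetical usage sketch: render a waveform PNG for an MPMediaItem and save it.
- (void) saveWaveformForMediaItem:(MPMediaItem *)item toPath:(NSString *)path {
    NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
    if (!assetURL) return;   // DRM-protected items have no readable asset URL

    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetURL options:nil];
    NSData *pngData = [self renderPNGAudioPictogramForAsset:asset];
    [asset release];

    [pngData writeToFile:path atomically:YES];
}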

Advanced Option: Finally, if you want to be able to play the audio using AVAudioPlayer, you'll need to cache it to your app's Caches folder. Since I was doing that, I decided to cache the image data also, and wrapped the whole thing into a UIImage category. You need to include this open-source offering (https://bitbucket.org/artgillespie/tslibraryimport/src) to extract the audio, and some code from here (http://objcolumnist.com/2011/05/03/performing-a-block-of-code-on-a-given-thread/) to handle some background threading features.

First, some defines and a few generic class methods for handling path names, etc.:

//#define imgExt @"jpg"
//#define imageToData(x) UIImageJPEGRepresentation(x,4)

#define imgExt @"png"
#define imageToData(x) UIImagePNGRepresentation(x)

+ (NSString *) assetCacheFolder  {
    NSArray  *assetFolderRoot = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
    return [NSString stringWithFormat:@"%@/audio", [assetFolderRoot objectAtIndex:0]];
 }

+ (NSString *) cachedAudioPictogramPathForMPMediaItem:(MPMediaItem*) item {
    NSString *assetFolder = [[self class] assetCacheFolder];
    NSNumber * libraryId = [item valueForProperty:MPMediaItemPropertyPersistentID];
    NSString *assetPictogramFilename = [NSString stringWithFormat:@"asset_%@.%@",libraryId,imgExt];
    return [NSString stringWithFormat:@"%@/%@", assetFolder, assetPictogramFilename];
}

+ (NSString *) cachedAudioFilepathForMPMediaItem:(MPMediaItem*) item {
    NSString *assetFolder = [[self class] assetCacheFolder];
 
    NSURL    * assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
    NSNumber * libraryId = [item valueForProperty:MPMediaItemPropertyPersistentID];
    
    NSString *assetFileExt = [[[assetURL path] lastPathComponent] pathExtension];
    NSString *assetFilename = [NSString stringWithFormat:@"asset_%@.%@",libraryId,assetFileExt];
    return [NSString stringWithFormat:@"%@/%@", assetFolder, assetFilename];
}

+ (NSURL *) cachedAudioURLForMPMediaItem:(MPMediaItem*) item {
    NSString *assetFilepath = [[self class] cachedAudioFilepathForMPMediaItem:item];
    return [NSURL fileURLWithPath:assetFilepath];
}

Now the init method that does "the business"

- (id) initWithMPMediaItem:(MPMediaItem*) item 
           completionBlock:(void (^)(UIImage* delayedImagePreparation))completionBlock  {
        
    NSFileManager *fman = [NSFileManager defaultManager];
    NSString *assetPictogramFilepath = [[self class] cachedAudioPictogramPathForMPMediaItem:item];

    if ([fman fileExistsAtPath:assetPictogramFilepath]) {
  
        NSLog(@"Returning cached waveform pictogram: %@",[assetPictogramFilepath lastPathComponent]);
        
        self = [self initWithContentsOfFile:assetPictogramFilepath];
        return self;
    }
    
    NSString *assetFilepath = [[self class] cachedAudioFilepathForMPMediaItem:item];
    
    NSURL *assetFileURL = [NSURL fileURLWithPath:assetFilepath];
    
    if ([fman fileExistsAtPath:assetFilepath]) {
        
        NSLog(@"scanning cached audio data to create UIImage file: %@",[assetFilepath lastPathComponent]);
        
        [assetFileURL retain];
        [assetPictogramFilepath retain];
        
        [NSThread MCSM_performBlockInBackground: ^{
            
            AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetFileURL options:nil];
            NSData *waveFormData = [self renderPNGAudioPictogramForAsset:asset]; 
            
            [waveFormData writeToFile:assetPictogramFilepath atomically:YES];
            
            [assetFileURL release];
            [assetPictogramFilepath release];
            
            if (completionBlock) {
                
                [waveFormData retain];
                [NSThread MCSM_performBlockOnMainThread:^{
               
                    UIImage *result = [UIImage imageWithData:waveFormData];
                    
                    NSLog(@"returning rendered pictogram on main thread (%d bytes %@ data in UIImage %0.0f x %0.0f pixels)",waveFormData.length,[imgExt uppercaseString],result.size.width,result.size.height);
                    
                    completionBlock(result);
                    
                    [waveFormData release];
                }];
            }
        }];
         
        return nil;
   
    } else {
        
        NSString *assetFolder = [[self class] assetCacheFolder];
        
        [fman createDirectoryAtPath:assetFolder withIntermediateDirectories:YES attributes:nil error:nil];
        
        NSLog(@"Preparing to import audio asset data %@",[assetFilepath lastPathComponent]);
        
        [assetPictogramFilepath retain];
        [assetFileURL retain];
        
        TSLibraryImport* import = [[TSLibraryImport alloc] init];
        NSURL    * assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
        
         [import importAsset:assetURL toURL:assetFileURL completionBlock:^(TSLibraryImport* import) {
            //check the status and error properties of
            //TSLibraryImport
            
            if (import.error) {
                
                NSLog (@"audio data import failed:%@",import.error);

            } else{
                NSLog (@"Creating waveform pictogram file: %@", [assetPictogramFilepath lastPathComponent]);
                AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetFileURL options:nil];
                NSData *waveFormData = [self renderPNGAudioPictogramForAsset:asset]; 
                
                [waveFormData writeToFile:assetPictogramFilepath atomically:YES];
                
                if (completionBlock) {
                     [waveFormData retain];
                    [NSThread MCSM_performBlockOnMainThread:^{
                    
                        UIImage *result = [UIImage imageWithData:waveFormData];
                        NSLog(@"returning rendered pictogram on main thread (%d bytes %@ data in UIImage %0.0f x %0.0f pixels)",waveFormData.length,[imgExt uppercaseString],result.size.width,result.size.height);
                        
                        completionBlock(result);
                      
                        [waveFormData release];
                    }];
                }
            }
            
            [assetPictogramFilepath release];
            [assetFileURL release];
            
        }  ];
        
        return nil;
    }
}

An example of invoking this:

-(void) importMediaItem {
    
    MPMediaItem* item = [self mediaItem];

    // since we will be needing this for playback, save the url to the cached audio.
    [url release];
    url = [[UIImage cachedAudioURLForMPMediaItem:item] retain];
    
    [waveFormImage release];
    
    waveFormImage = [[UIImage alloc ] initWithMPMediaItem:item completionBlock:^(UIImage* delayedImagePreparation){
        
        waveFormImage = [delayedImagePreparation retain];
        [self displayWaveFormImage];
    }];
    
    if (waveFormImage) {
        [waveFormImage retain];
        [self displayWaveFormImage];
    }
}
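
The displayWaveFormImage method isn't part of the answer. Since the rendered image is one pixel wide per averaged sample (far wider than the screen), here is a minimal sketch of how it might be presented, assuming the caller is a view controller holding the waveFormImage ivar from above (my assumption, not from the original code):

// Sketch only (not from the original answer): present the wide waveform image
// inside a horizontally scrolling UIScrollView.
- (void) displayWaveFormImage {
    UIImageView *imageView = [[UIImageView alloc] initWithImage:waveFormImage];

    UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
    scrollView.contentSize = imageView.frame.size;   // full width of the waveform
    [scrollView addSubview:imageView];
    [self.view addSubview:scrollView];

    [imageView release];
    [scrollView release];
}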

Logarithmic version of averaging and render methods

#define absX(x) (x<0?0-x:x)
#define minMaxX(x,mn,mx) (x<=mn?mn:(x>=mx?mx:x))
#define noiseFloor (-90.0)
#define decibel(amplitude) (20.0 * log10(absX(amplitude)/32767.0))
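
As a quick sanity check on these macros (illustrative values, not from the original answer): a full-scale 16-bit sample maps to 0 dB, a tenth of full scale to about -20 dB, and anything quieter than -90 dB is clamped to the noise floor.

// Example values for the decibel() / minMaxX() macros above (illustrative only):
Float32 fullScale = decibel(32767.0);                      // ~0.0 dB
Float32 tenth     = decibel(3276.7);                       // ~-20.0 dB
Float32 quiet     = minMaxX(decibel(1.0), noiseFloor, 0);  // ~-90.3 dB, clamped to -90.0
NSLog(@"%f %f %f", fullScale, tenth, quiet);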

-(UIImage *) audioImageLogGraph:(Float32 *) samples
                normalizeMax:(Float32) normalizeMax
                 sampleCount:(NSInteger) sampleCount 
                channelCount:(NSInteger) channelCount
                 imageHeight:(float) imageHeight {
    
    CGSize imageSize = CGSizeMake(sampleCount, imageHeight);
    UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    
    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextSetAlpha(context,1.0);
    CGRect rect;
    rect.size = imageSize;
    rect.origin.x = 0;
    rect.origin.y = 0;
    
    CGColorRef leftcolor = [[UIColor whiteColor] CGColor];
    CGColorRef rightcolor = [[UIColor redColor] CGColor];
    
    CGContextFillRect(context, rect);
    
    CGContextSetLineWidth(context, 1.0);
    
    float halfGraphHeight = (imageHeight / 2) / (float) channelCount ;
    float centerLeft = halfGraphHeight;
    float centerRight = (halfGraphHeight*3) ; 
    float sampleAdjustmentFactor = (imageHeight/ (float) channelCount) / (normalizeMax - noiseFloor) / 2;
    
    for (NSInteger intSample = 0 ; intSample < sampleCount ; intSample ++ ) {
        Float32 left = *samples++;
        float pixels = (left - noiseFloor) * sampleAdjustmentFactor;
        CGContextMoveToPoint(context, intSample, centerLeft-pixels);
        CGContextAddLineToPoint(context, intSample, centerLeft+pixels);
        CGContextSetStrokeColorWithColor(context, leftcolor);
        CGContextStrokePath(context);
        
        if (channelCount==2) {
            Float32 right = *samples++;
            float pixels = (right - noiseFloor) * sampleAdjustmentFactor;
            CGContextMoveToPoint(context, intSample, centerRight - pixels);
            CGContextAddLineToPoint(context, intSample, centerRight + pixels);
            CGContextSetStrokeColorWithColor(context, rightcolor);
            CGContextStrokePath(context); 
        }
    }
    
    // Create new image
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    
    // Tidy up
    UIGraphicsEndImageContext();   
    
    return newImage;
}

- (NSData *) renderPNGAudioPictogramLogForAsset:(AVURLAsset *)songAsset {
    
    NSError * error = nil;
    AVAssetReader * reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
    AVAssetTrack * songTrack = [songAsset.tracks objectAtIndex:0];
    
    NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
                                        //     [NSNumber numberWithInt:44100.0],AVSampleRateKey, /*Not Supported*/
                                        //     [NSNumber numberWithInt: 2],AVNumberOfChannelsKey,    /*Not Supported*/
                                        
                                        [NSNumber numberWithInt:16],AVLinearPCMBitDepthKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
                                        [NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved,
                                        nil];
    
    AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict];
    
    [reader addOutput:output];
    [output release];
    
    UInt32 sampleRate,channelCount;
    
    NSArray* formatDesc = songTrack.formatDescriptions;
    for(unsigned int i = 0; i < [formatDesc count]; ++i) {
        CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i];
        const AudioStreamBasicDescription* fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription (item);
        if(fmtDesc ) {
            
            sampleRate = fmtDesc->mSampleRate;
            channelCount = fmtDesc->mChannelsPerFrame;
            
            //    NSLog(@"channels:%u, bytes/packet: %u, sampleRate %f",fmtDesc->mChannelsPerFrame, fmtDesc->mBytesPerPacket,fmtDesc->mSampleRate);
        }
    }
    
    UInt32 bytesPerSample = 2 * channelCount;
    Float32 normalizeMax = noiseFloor;
    NSLog(@"normalizeMax = %f",normalizeMax);
    NSMutableData * fullSongData = [[NSMutableData alloc] init];
    [reader startReading];
    
    UInt64 totalBytes = 0; 
    Float64 totalLeft = 0;
    Float64 totalRight = 0;
    Float32 sampleTally = 0;
    
    NSInteger samplesPerPixel = sampleRate / 50;
    
    while (reader.status == AVAssetReaderStatusReading){
        
        AVAssetReaderTrackOutput * trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];
        
        if (sampleBufferRef){
            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef);
            
            size_t length = CMBlockBufferGetDataLength(blockBufferRef);
            totalBytes += length;
         
            NSAutoreleasePool *wader = [[NSAutoreleasePool alloc] init];
            
            NSMutableData * data = [NSMutableData dataWithLength:length];
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, data.mutableBytes);
         
            SInt16 * samples = (SInt16 *) data.mutableBytes;
            int sampleCount = length / bytesPerSample;
            for (int i = 0; i < sampleCount ; i ++) {
                
                Float32 left = (Float32) *samples++;
                left = decibel(left);
                left = minMaxX(left,noiseFloor,0);
                totalLeft  += left;
                
                Float32 right;
                if (channelCount==2) {
                    right = (Float32) *samples++;
                    right = decibel(right);
                    right = minMaxX(right,noiseFloor,0);
                    totalRight += right;
                }
                
                sampleTally++;
                
                if (sampleTally > samplesPerPixel) {
                    
                    left  = totalLeft / sampleTally; 
                    if (left > normalizeMax) {
                        normalizeMax = left;
                    }
                 
                   // NSLog(@"left average = %f, normalizeMax = %f",left,normalizeMax);
                    
                    [fullSongData appendBytes:&left length:sizeof(left)];
                    
                    if (channelCount==2) {
                        right = totalRight / sampleTally; 
                        
                        if (right > normalizeMax) {
                            normalizeMax = right;
                        }
                        
                        [fullSongData appendBytes:&right length:sizeof(right)];
                    }
                    
                    totalLeft   = 0;
                    totalRight  = 0;
                    sampleTally = 0;
                }
            }
            
           [wader drain];
            
            CMSampleBufferInvalidate(sampleBufferRef);
            CFRelease(sampleBufferRef);
        }
    }
   
    NSData * finalData = nil;
    
    if (reader.status == AVAssetReaderStatusFailed || reader.status == AVAssetReaderStatusUnknown){
        // Something went wrong. Handle it.
    }
    
    if (reader.status == AVAssetReaderStatusCompleted){
        // You're done. It worked.
        
        NSLog(@"rendering output graphics using normalizeMax %f",normalizeMax);
        
        UIImage *test = [self audioImageLogGraph:(Float32 *) fullSongData.bytes
                                    normalizeMax:normalizeMax
                                     sampleCount:fullSongData.length / (sizeof(Float32) * 2)
                                    channelCount:2
                                     imageHeight:100];
        
        finalData = imageToData(test);
    }
    
    [fullSongData release];
    [reader release];
    
    return finalData;
}

Comparison outputs

Linear
[Linear plot for start of "Warm It Up" by Acme Swing Company]

Logarithmic
[Logarithmic plot for start of "Warm It Up" by Acme Swing Company]

Solution 2 - iPhone

You should be able to get a buffer of audio from your sampleBufferRef and then iterate through those values to build your waveform:

CMBlockBufferRef buffer = NULL;
CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(sampleBufferRef);
AudioBufferList audioBufferList;

// Fills audioBufferList and retains the backing block buffer into `buffer`
// (CFRelease it when you are done with the sample data).
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                                                        sampleBufferRef,
                                                        NULL,
                                                        &audioBufferList,
                                                        sizeof(audioBufferList),
                                                        NULL,
                                                        NULL,
                                                        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                        &buffer
                                                        );

// This copies your audio out to a temp buffer, but you should be able to
// iterate through audioBufferList.mBuffers[0].mData directly instead.
SInt32* readBuffer = (SInt32 *)malloc(numSamplesInBuffer * sizeof(SInt32));
memcpy(readBuffer, audioBufferList.mBuffers[0].mData, numSamplesInBuffer * sizeof(SInt32));
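
The snippet stops at the memcpy; as a rough sketch of the next step (my addition, not from the original answer), you could scan the copied samples for a peak per buffer, something like:

// Sketch: find the peak magnitude in this buffer. The sample type must match
// your reader's output settings (SInt32 here to match the memcpy above;
// 16-bit linear PCM as in Solution 1 would use SInt16 instead).
SInt32 peak = 0;
for (CMItemCount i = 0; i < numSamplesInBuffer; i++) {
    SInt32 sample = readBuffer[i];
    if (sample < 0) sample = -sample;
    if (sample > peak) peak = sample;
}
// Collect one peak (or average) per buffer, or per N samples, to drive your drawing.

free(readBuffer);
CFRelease(buffer);   // the block buffer was retained by the call above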

Solution 3 - iPhone

Another approach, using Swift 5 and AVAudioFile:

///Gets the audio file from a URL, downsamples it, and draws it into the sound layer.
func drawSoundWave(fromURL url:URL, fromPosition:Int64, totalSeconds:UInt32, samplesSecond:CGFloat) throws{
    
    print("\(logClassName) Drawing sound from \(url)")
    
    do{
        waveViewInfo.samplesSeconds = samplesSecond
        
        //Get audio file and format from URL
        let audioFile = try AVAudioFile(forReading: url)
        
        waveViewInfo.format = audioFile.processingFormat
        audioFile.framePosition = fromPosition * Int64(waveViewInfo.format.sampleRate)
        
        //Getting the buffer
        let frameCapacity:UInt32 = totalSeconds * UInt32(waveViewInfo.format.sampleRate)
        
        guard let audioPCMBuffer = AVAudioPCMBuffer(pcmFormat: waveViewInfo.format, frameCapacity: frameCapacity) else{ throw AppError("Unable to get the AVAudioPCMBuffer") }
        try audioFile.read(into: audioPCMBuffer, frameCount: frameCapacity)
        let audioPCMBufferFloatValues:[Float] = Array(UnsafeBufferPointer(start: audioPCMBuffer.floatChannelData?.pointee,
                                                                          count: Int(audioPCMBuffer.frameLength)))
    
        waveViewInfo.points = []
        waveViewInfo.maxValue = 0
        for index in stride(from: 0, to: audioPCMBufferFloatValues.count, by: Int(audioFile.fileFormat.sampleRate) / Int(waveViewInfo.samplesSeconds)){
            
            let aSample = CGFloat(audioPCMBufferFloatValues[index])
            waveViewInfo.points.append(aSample)
            let fix = abs(aSample)
            if fix > waveViewInfo.maxValue{
                waveViewInfo.maxValue = fix
            }
            
        }
        
        print("\(logClassName) Finished the points - Count = \(waveViewInfo.points.count) / Max = \(waveViewInfo.maxValue)")
        
        populateSoundImageView(with: waveViewInfo)
        
    }
    catch{
        
        throw error
        
    }
    
}

///Converts the sound wave into a UIImage
func populateSoundImageView(with waveViewInfo:WaveViewInfo){
    
    let imageSize:CGSize = CGSize(width: CGFloat(waveViewInfo.points.count),//CGFloat(waveViewInfo.points.count) * waveViewInfo.sampleSpace,
                                  height: frame.height)
    let drawingRect = CGRect(origin: .zero, size: imageSize)
    
    UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
    defer {
        UIGraphicsEndImageContext()
    }
    print("\(logClassName) Converting sound view in rect \(drawingRect)")
    
    guard let context:CGContext = UIGraphicsGetCurrentContext() else{ return }
    
    context.setFillColor(waveViewInfo.backgroundColor.cgColor)
    context.setAlpha(1.0)
    context.fill(drawingRect)
    context.setLineWidth(1.0)
    //        context.setLineWidth(waveViewInfo.lineWidth)
    
    let sampleAdjustFactor = imageSize.height / waveViewInfo.maxValue
    for pointIndex in waveViewInfo.points.indices{

        let pixel = waveViewInfo.points[pointIndex] * sampleAdjustFactor

        context.move(to: CGPoint(x: CGFloat(pointIndex), y: middleY - pixel))
        context.addLine(to: CGPoint(x: CGFloat(pointIndex), y: middleY + pixel))

        context.setStrokeColor(waveViewInfo.strokeColor.cgColor)
        context.strokePath()

    }
    
     //        for pointIndex in waveViewInfo.points.indices{
    //
    //            let pixel = waveViewInfo.points[pointIndex] * sampleAdjustFactor
    //
    //            context.move(to: CGPoint(x: CGFloat(pointIndex) * waveViewInfo.sampleSpace, y: middleY - pixel))
    //            context.addLine(to: CGPoint(x: CGFloat(pointIndex) * waveViewInfo.sampleSpace, y: middleY + pixel))
    //
    //            context.setStrokeColor(waveViewInfo.strokeColor.cgColor)
    //            context.strokePath()
    //
    //        }
    
    //        var xIncrement:CGFloat = 0
    //        for point in waveViewInfo.points{
    //
    //            let normalizedPoint = point * sampleAdjustFactor
    //
    //            context.move(to: CGPoint(x: xIncrement, y: middleY - normalizedPoint))
    //            context.addLine(to: CGPoint(x: xIncrement, y: middleX + normalizedPoint))
    //            context.setStrokeColor(waveViewInfo.strokeColor.cgColor)
    //            context.strokePath()
    //
    //            xIncrement += waveViewInfo.sampleSpace
    //
    //        }
    
    guard let soundWaveImage = UIGraphicsGetImageFromCurrentImageContext() else{ return }
    
    soundWaveImageView.image = soundWaveImage
    //        //In case of handling sample space in for
    //        updateWidthConstraintValue(soundWaveImage.size.width)
    updateWidthConstraintValue(soundWaveImage.size.width * waveViewInfo.sampleSpace)
    
}

Where:

class WaveViewInfo {

    var format:AVAudioFormat!
    var samplesSeconds:CGFloat = 50
    var lineWidth:CGFloat = 0.20
    var sampleSpace:CGFloat = 0.20
    
    var strokeColor:UIColor = .red
    var backgroundColor:UIColor = .clear

    var maxValue:CGFloat = 0
    var points:[CGFloat] = [CGFloat]()
    
}

At the moment this only draws one sound wave, but it can be extended. The good part is that you can draw an audio track in parts.

Solution 4 - iPhone

A little bit of refactoring of the above answers (using AVAudioFile):


import AVFoundation
import CoreGraphics
import Foundation
import UIKit

class WaveGenerator {
    private func readBuffer(_ audioUrl: URL) -> [Float] {
        let file = try! AVAudioFile(forReading: audioUrl)

        let audioFormat = file.processingFormat
        let audioFrameCount = UInt32(file.length)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: audioFrameCount)
        else { return [] }
        do {
            try file.read(into: buffer)
        } catch {
            print(error)
        }

        // Copy the samples into an owned array; returning an UnsafeBufferPointer into
        // `buffer` here would dangle once the local AVAudioPCMBuffer is deallocated.
        let floatArray = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0], count: Int(buffer.frameLength)))

        return floatArray
    }

    private func generateWaveImage(
        _ samples: [Float],
        _ imageSize: CGSize,
        _ strokeColor: UIColor,
        _ backgroundColor: UIColor
    ) -> UIImage? {
        let drawingRect = CGRect(origin: .zero, size: imageSize)

        UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)

        let middleY = imageSize.height / 2

        guard let context: CGContext = UIGraphicsGetCurrentContext() else { return nil }

        context.setFillColor(backgroundColor.cgColor)
        context.setAlpha(1.0)
        context.fill(drawingRect)
        context.setLineWidth(0.25)

        let max: CGFloat = CGFloat(samples.max() ?? 0)
        let heightNormalizationFactor = imageSize.height / max / 2
        let widthNormalizationFactor = imageSize.width / CGFloat(samples.count)
        for index in 0 ..< samples.count {
            let pixel = CGFloat(samples[index]) * heightNormalizationFactor

            let x = CGFloat(index) * widthNormalizationFactor

            context.move(to: CGPoint(x: x, y: middleY - pixel))
            context.addLine(to: CGPoint(x: x, y: middleY + pixel))

            context.setStrokeColor(strokeColor.cgColor)
            context.strokePath()
        }
        guard let soundWaveImage = UIGraphicsGetImageFromCurrentImageContext() else { return nil }

        UIGraphicsEndImageContext()
        return soundWaveImage
    }

    func generateWaveImage(from audioUrl: URL, in imageSize: CGSize) -> UIImage? {
        let samples = readBuffer(audioUrl)
        let img = generateWaveImage(samples, imageSize, UIColor.blue, UIColor.white)
        return img
    }
}

Usage

let waveGenerator = WaveGenerator()
let url = Bundle.main.url(forResource: "TEST1.mp3", withExtension: "")!
let img = waveGenerator.generateWaveImage(from: url, in: CGSize(width: 600, height: 200))

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | iFreeman | View Question on Stackoverflow
Solution 1 - iPhone | unsynchronized | View Answer on Stackoverflow
Solution 2 - iPhone | Joel | View Answer on Stackoverflow
Solution 3 - iPhone | Reimond Hill | View Answer on Stackoverflow
Solution 4 - iPhone | Learner | View Answer on Stackoverflow