
Copyright © 2011-2018 MultiMedia Soft

How to use the API in your projects


First of all, if anything in this guide is unclear, feel free to contact our support team with any doubt or question you may have: we usually answer all of our customers' questions in less than 24 hours. Note that the purpose of this guide is simply to give a taste of the main features available inside Audio Sound Editor API, and it may not cover all of the available features.


To complement this guide, a number of examples showing the use of this component can be found inside the "Samples" folder: if during the setup phase you left the installation directory at its default, you will find them inside "C:\Program Files\Audio Sound Editor API for .NET\Samples".


We have already seen in a previous tutorial how to add the API to various development environments such as C#, Visual Basic .NET, Visual C++ (unmanaged) and Visual Basic 6, and how to create a new instance of the component's class.


Before starting sound editing management, the component needs to be initialized: for this purpose a call to the InitEditor method is mandatory; the best place to call this initialization method is usually the container form's initialization function: for example, when using Visual C#, it will be the Form_Load function. The main purpose of calling the InitEditor method is to synchronize the component with its container form. When the API is no longer needed, usually when the container application is closed, it's recommended to invoke the Dispose method, which frees all of the allocated resources and closes communication with the underlying drivers.
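As an illustration, the initialization/disposal flow described above might be coded in C# roughly as follows. This is a minimal sketch: the namespace, class name and exact method signatures (for example, whether InitEditor takes parameters) are assumptions and should be checked against the class reference of your version.

```csharp
// Minimal sketch of the lifecycle described above; class name, namespace and
// exact signatures are assumptions to be verified against the class reference.
public partial class EditorForm : System.Windows.Forms.Form
{
    private AudioSoundEditor.AudioSoundEditor audioEditor;

    private void Form_Load(object sender, System.EventArgs e)
    {
        audioEditor = new AudioSoundEditor.AudioSoundEditor();
        // Synchronizes the component with its container form.
        audioEditor.InitEditor();
    }

    private void EditorForm_FormClosed(object sender, System.Windows.Forms.FormClosedEventArgs e)
    {
        // Frees allocated resources and closes communication with the drivers.
        audioEditor.Dispose();
    }
}
```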



Important note


All of our audio-related components share the same instance of the multimedia engine, AdjMmsEng.dll or AdjMmsEng64.dll, whose initialization/disposal requires a certain amount of time because it needs to negotiate with the underlying Windows multimedia system and the sound card drivers: this operation, depending upon the system and the quality of the sound card drivers in use, may require a few seconds.

In order to obtain better efficiency when instancing the API on multiple container forms within the same application, it is highly recommended to host and initialize, through the InitEditor method, an instance of the API inside a form that will stay alive for the full life of the container application, for example the application's main form: although this instance of the API may stay hidden and may never be used, its memory footprint is in any case irrelevant, but it keeps the multimedia engine alive, allowing faster loading/disposal when container forms are opened/closed.
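The keep-alive pattern described in this note might look like the following in C#; as above, this is a sketch whose class name and signatures are assumptions to be verified against the class reference.

```csharp
// Sketch: a hidden instance hosted on the main form keeps the shared multimedia
// engine (AdjMmsEng.dll / AdjMmsEng64.dll) loaded for the application's lifetime.
public partial class MainForm : System.Windows.Forms.Form
{
    // Never used for editing: its only purpose is keeping the engine alive.
    private AudioSoundEditor.AudioSoundEditor keepAliveEditor;

    private void Form_Load(object sender, System.EventArgs e)
    {
        keepAliveEditor = new AudioSoundEditor.AudioSoundEditor();
        keepAliveEditor.InitEditor();
    }

    private void MainForm_FormClosed(object sender, System.Windows.Forms.FormClosedEventArgs e)
    {
        // Disposed last, when the whole application shuts down.
        keepAliveEditor.Dispose();
    }
}
```

With this instance alive, instances created on secondary forms can be initialized and disposed much faster.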




After the component's initialization we should decide whether the sound under editing will be stored in memory or inside a temporary file: this can be done through the GetStoreMode and SetStoreMode methods; it's important to note that the sound under editing will always be stored in uncompressed WAV PCM format, so its total size could become an issue when dealing with low-end systems:


when stored in memory, you can get the pointer to the memory buffer containing WAV PCM data through the GetMemoryPtr method and the total size in bytes of the memory buffer through the GetMemorySize method: it's strongly suggested to avoid memory-based storage for clips longer than 10 minutes.
when stored inside a temporary file, you can get its absolute pathname through the GetTempFilePathname method and its total size in bytes through the GetTempFileSize64 method.
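For example, temporary-file storage might be selected and queried as follows; the method names come from this guide, while the value passed to SetStoreMode is a hypothetical constant (the actual enumeration is documented in the class reference).

```csharp
// Sketch: keep the session's uncompressed WAV PCM data inside a temporary file.
audioEditor.SetStoreMode(STORE_MODE_TEMP_FILE);      // hypothetical constant
// ... after a sound has been loaded ...
string tempPath = audioEditor.GetTempFilePathname(); // absolute pathname of the temporary file
long   tempSize = audioEditor.GetTempFileSize64();   // total size in bytes
```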


At this point we can load a sound for editing purposes from different sources:


from a disk file through the LoadSound or LoadSoundFromRawFile methods; the most common file formats are supported. In case the file to load is a multi-channel one, like 5.1 or 7.1, you can keep the existing channels separated by invoking the MultiChannelLoadingModeSet method with the nMode parameter set to MULTICHAN_DOWNMIX_NONE: without this call the source audio file will be automatically converted to stereo during the loading procedure.
from a file, previously encrypted through our CryptSound™ application, through the LoadSoundEncrypted method.
from a memory buffer through the LoadSoundFromMemory or LoadSoundFromRawMemory methods; the most common file formats are supported.
from the system Clipboard through the LoadSoundFromClipboard method: the availability of a sound inside the Clipboard can be detected by calling the IsSoundAvailableInClipboard method.
from another editing session loaded by another instance of the API through the LoadSoundFromEditingSession method.
from a recording session activated by an instance of our Audio Sound Recorder API for .NET through the LoadSoundFromRecordingSession method.
from an entry of a ZIP file through the LoadSoundFromZip method.
from an external source providing raw audio data, by invoking the RawAudioFromExternalSourceStart method, followed by a number of calls to the RawAudioFromExternalSourcePush method, which stores raw audio data, and finally by invoking the RawAudioFromExternalSourceStop method, which loads the received raw audio data into the editor.
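As an example, loading a 5.1 file while keeping its channels separated might look like the following; the nMode parameter and the MULTICHAN_DOWNMIX_NONE value are named in this guide, while the file path is obviously hypothetical and the return value of LoadSound should be checked against the class reference.

```csharp
// Sketch: disable the automatic downmix to stereo, then load a surround file.
audioEditor.MultiChannelLoadingModeSet(MULTICHAN_DOWNMIX_NONE);
audioEditor.LoadSound(@"C:\MySounds\surround51.wav"); // hypothetical path
```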


By default, each of the mentioned loading methods will create a new editing session, discarding the existing one: this behaviour can be modified through the SetLoadingMode method, which allows loading a new sound without discarding the existing editing session and performing one of the following operations:


Append the loaded sound to the existing editing session.
Insert the loaded sound at a given position within the existing editing session: the insertion position must be previously set through the SetInsertPos method.
Mix the loaded sound at a given position within the existing editing session: the mixing position must be previously set through the SetMixingPos method; it's important to note that, during the mixing phase, you can separately modify the volume of the loaded sound and the volume of the existing editing session through a previous call to the SetMixingParams method; as a further feature, this method allows putting the loaded sound in loop over a given range of the existing editing session.
Overwrite the existing editing session with the loaded sound: the overwrite position must be previously set through the SetOverwritePos method.
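For instance, mixing a new sound into the existing session might be sketched as follows; the loading-mode constant, the position units (assumed here to be milliseconds) and the meaning of the volume parameters are assumptions to be verified against the class reference.

```csharp
// Sketch: mix the loaded sound at the 5-second position of the current session,
// lowering the loaded sound's volume with respect to the existing session.
audioEditor.SetLoadingMode(LOADING_MODE_MIX);      // hypothetical constant
audioEditor.SetMixingPos(5000);                    // assumed position in milliseconds
audioEditor.SetMixingParams(50, 100);              // assumed volume percentages
audioEditor.LoadSound(@"C:\MySounds\overlay.mp3"); // hypothetical path
```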


As a further loading feature, you can limit the range of sound data that will be loaded from the given sound file with a previous call to the SetLoadingRange method, a useful feature if you want to extract a small portion of an existing sound file for editing purposes.
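For example, extracting only a portion of a long file might be sketched as follows (the units are assumed to be milliseconds; verify against the class reference):

```csharp
// Sketch: load only the range between 10 and 30 seconds of the source file.
audioEditor.SetLoadingRange(10000, 30000);                // assumed range in milliseconds
audioEditor.LoadSound(@"C:\MySounds\long_recording.wav"); // hypothetical path
```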


Information about the loaded sound file, for example the audio format, can be retrieved through the GetFileType and GetFileInfo methods.


Further editing features, applicable to the whole editing session or to a specific range, can be summarized by the following list:


Copy to the clipboard through the CopyRangeToClipboard method.
Delete through the DeleteRange method.
Deletion of portions outside of a given range through the ReduceToRange method.
Silence insertion through the InsertSilence method.
Trim initial and final portions of silence through the TrimSilence method.
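A couple of these range-based operations might be sketched as follows; the position units (assumed to be milliseconds) and the exact signatures are assumptions to be verified against the class reference.

```csharp
// Sketch: delete the range between 1 and 2 seconds, then trim leading/trailing silence.
audioEditor.DeleteRange(1000, 2000); // assumed positions in milliseconds
audioEditor.TrimSilence();           // parameters, if any, per the class reference
```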


A certain number of special effects can be applied to the existing editing session through the EffectsMan class:


Flat volume
Sliding volume
Volume automation
DMO (DirectX Media Object) effects
Chorus, Compressor, Distortion, Echo, Phaser, Reverb, Auto Wah and Dynamic amplifier effects
Internal and External Custom DSP effects
Commercial and freeware VST effects
Reverse sound
Tempo change
Playback rate change
Pitch change
IIR and FIR filters
Vocal removal filter
DC Offset removal


All of the mentioned effects are applied through separate editing sessions, meaning that, for each of them, there will be a corresponding method to invoke. Through the interaction with our Active DJ Studio component, which supports most of the mentioned effects in real time during playback, you have the possibility to apply a number of these settings in a single editing session by invoking the Effects.PlayerSettingsApply method.


Basic noise removal is also supported for the following kinds of noise:


Pops and Clicks removal
Hiss noise removal


See the How to perform noise removal tutorial for further details.


During the execution of editing sessions the container application is notified about the current advancement through the CallbackEditPerc delegate, set through the CallbackEditPercSet method.
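A progress callback might be wired up roughly as follows; the delegate signature (a simple percentage value) is an assumption based on the method names in this guide and should be verified against the class reference.

```csharp
// Sketch: receive advancement notifications during editing sessions.
private void EditPercCallback(short percentage)
{
    // Hypothetical UI update; remember to marshal to the UI thread if needed.
    progressBar.Value = percentage;
}

// During initialization:
audioEditor.CallbackEditPercSet(EditPercCallback);
```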


During the editing session the DisplayWaveformAnalyzer property, which implements the WaveformAnalyzer class, allows performing a deep analysis and a detailed graphical visualization, with zooming and panning capabilities, of the sound's waveform; it can also render the waveform to a bitmap (in memory or into a graphic file).


IMPORTANT: The graphical visualization of the WaveformAnalyzer class on a user interface requires the component to be instanced inside a container form.


Contents of the editing session can be exported to an output file through the ExportToFile method. The encoding format used for the output file can be set through the EncodeFormats.FormatToUse property and settings specific for each available sound format can be set through the available sub-properties of the EncodeFormats property: if for example you need to export in MP3 format, you will have to deal with properties and methods available inside the EncodeFormats.MP3 property.
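Exporting to MP3 might then be sketched as follows; EncodeFormats.FormatToUse, EncodeFormats.MP3 and ExportToFile are named in this guide, while the enum value and the ExportToFile signature shown here are assumptions.

```csharp
// Sketch: select MP3 as the output encoding format, then export the session.
audioEditor.EncodeFormats.FormatToUse = ENCODING_FORMAT_MP3; // hypothetical enum value
// MP3-specific settings would be applied through audioEditor.EncodeFormats.MP3 here.
audioEditor.ExportToFile(@"C:\Output\result.mp3");           // hypothetical path/signature
```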


Through the SilencePositionsDetect method, the control supports silence detection, a feature which allows detecting the positions of silent portions inside the loaded sound and storing them inside a list. The list of detected silent portions can be enumerated at a later time through the SilencePositionsNumGet and SilencePositionsGetRange methods.

Silence detection can be canceled at any time through the SilencePositionsCancelDetect method.
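Silence detection and enumeration might be sketched as follows; the out-parameter signature of SilencePositionsGetRange and the position units are assumptions to be verified against the class reference.

```csharp
// Sketch: detect silent portions, then enumerate the detected ranges.
audioEditor.SilencePositionsDetect();             // advancement may be notified through callbacks
int count = audioEditor.SilencePositionsNumGet();
for (int i = 0; i < count; i++)
{
    long beginPos, endPos;                        // assumed positions of the silent range
    audioEditor.SilencePositionsGetRange(i, out beginPos, out endPos);
}
```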


Related to the mentioned silence detection, through the TracksPositionsDetect method the control also supports audible tracks detection: instead of detecting silent portions, in a very similar way it allows detecting and enumerating audible tracks separated by portions of silence. The list of detected audible tracks can be enumerated at a later time through the TracksPositionsNumGet and TracksPositionsRangeGet methods.

After reviewing enumerated tracks, you may decide that two tracks should be attached together in order to consider them as a single one: this can be done through a call to the TracksPositionsRangeAttachToNext method.

Audible tracks detection can be canceled at any time through the TracksPositionsCancelDetect method.


Finally, an editing session can be closed and its contents discarded from memory through a call to the CloseSound method.


Further capabilities of the component, not directly related to sound editing, are the following:


the JoinFilesFromDisk method allows joining two mono sound files into a single stereo output sound file.
the ExtractAudioFromVideoFile method allows extracting the audio track from a video clip and storing it inside an output sound file that may be used later for further editing.
Conversion of sound files from one audio format to another, with the possibility to keep existing tags and to apply normalization and DC offset removal: see the How to convert the format of sound files tutorial for details.


Lengthy operations, such as loading of files, exporting and editing sessions, are performed inside secondary threads that make use of callbacks in order to keep the container application informed about the advancement of the various sessions: see the How to synchronize the container application with the API tutorial for further details.