Starting Audio Analysis

To start using the Subtitling Assistant you need to load the video in EZTitles and wait for it to be processed: shot changes detected and the audio graph built up completely on the Timeline. The "Processing video file" progress bar in the bottom right corner of EZTitles fills up to the end and indicates when the initial processing is complete.

After that, click the Subtitling Assistant drop-down and select Start/Resume Audio Analysis. Provided that a valid Wallet is already selected (how to do this is explained in the previous topic), select the language from the Audio Language drop-down and press the OK button to start the audio analysis:

Note: Right now, the Subtitling Assistant works with same-language transcriptions only: the language of the original audio must be identical to the language of the subtitles/captions that will be created in the end. Please make sure to select the appropriate language from the Audio Language drop-down menu.

After pressing the OK button, EZTitles will start uploading the audio for analysis. This is indicated by a progress bar in the bottom right corner:

The audio must be uploaded completely before the audio analysis actually starts.

The audio analysis can be stopped only while the audio is uploading, via Subtitling Assistant/Stop Audio Analysis; the Stop... button is grayed out once the upload has finished. The audio transcription (speech recognition) begins automatically as soon as the audio finishes uploading.

Note: Your wallet will be charged after the audio uploads completely.

The audio analysis takes some time to complete and the time required will be displayed below the preview list:

as well as by the progress bar in the bottom right part of the status bar:

At this point it isn't necessary to keep EZTitles open; if the audio analysis takes more than a few minutes, you can safely close EZTitles or switch to a different project in the meantime. The audio transcription will complete and you will be able to get it the next time you open the same video file.

Note: Please mind that the transcription and audio files uploaded to the cloud will be available for a limited period only:

The uploaded audio file will be deleted as soon as the audio analysis ends;

The audio transcription file will be downloaded automatically when the audio analysis has finished, provided the video is still open in EZTitles. If you've closed EZTitles in the meantime, the audio transcription will be kept available until you reopen the same video file, but for a maximum of 30 days.

The audio transcription file will be deleted as soon as it has been downloaded, or 30 days after the audio analysis was initially started.

Sections analysis

As previously mentioned, the audio analysis is performed for a single configured language. This means that videos with two or more spoken languages will not be processed and analyzed correctly.

The Enable sections analysis option provides a workaround by allowing you to define which parts of the video are spoken in which language, so that the analysis runs for each section with the correct language:

It's possible to add as many language sections as you need, as long as they don't overlap in time. If two sections do overlap, the issue is promptly identified by an "Invalid audio configuration" warning.

After activating Enable sections analysis, EZTitles automatically creates one initial section spanning the whole duration of the video. To modify its Start Time or End Time points, you can either type in the respective timecode value by hand or position the video at the desired point and use the pipette button to copy the current timecode from the Timeline.

To add a new section or remove an existing one, press the "arrow pointing down" button and select the respective option.

Note: Removing an already analyzed section does not delete its analysis data. The section is removed from the configuration, but its data is kept in case the section is added back later on.

After the analysis of all configured sections has finished, the Timeline is colored light green for the duration of each section, marking the parts that have been analyzed by the Assistant. For more details about the different indications, colors and options available for the Timeline, please refer to the following topic.

Starting Audio Analysis from the command line

Audio Analysis can also be started from the command line. This could be particularly useful if videos need to be processed in bulk, in advance. Multiple instances of EZTitles can be used, each processing a different video file. By design, a video file can be opened for analysis in only one instance of the program. If another instance accidentally starts analyzing a video which is being processed, or has already been processed, by another instance, the audio analysis will automatically terminate and that copy of EZTitles will close. Once the video has been analyzed, the particular instance of EZTitles will automatically close as well.

The command line for starting the audio analysis is as follows:

EZTitles6.exe -o="D:\Work\For Analysis\sample.avi" -AA="D:\Work\For Analysis\Config.cfg"

The -o parameter specifies the video that will be analyzed. It can be replaced by the -v parameter which points to a file that will be imported. For more details about the -o and -v parameters, please check the EZTitles Command Line Parameters topic.
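
For example, assuming the imported file already has the video to be analyzed associated with it (the sample.ezt file name below is purely hypothetical), a call using -v could look like this:

EZTitles6.exe -v="D:\Work\For Analysis\sample.ezt" -AA="D:\Work\For Analysis\Config.cfg"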

The -AA parameter must specify the path to the configuration file that will be used for the audio analysis. The configuration file (a sample can be found in EZTitles' installation folder) contains just a basic configuration for the audio analysis:

[Analysis]
; Enter a valid wallet ID with enough tokens to perform the analysis
AutoAnalyzeWalletId=
; Specify the audio language using the correct language code from this list: http://www.eztitles.com/download.php?file=sa-supported-langs
AutoAnalyzeRecLang=en-US
; Specify a valid UNC path where the log file will be created (optional)
LogFile=

The AutoAnalyzeWalletId parameter must point to a valid Wallet ID which will be charged during the operation.
 
The AutoAnalyzeRecLang parameter is required for the analysis itself and must be set to the correct language code. All currently supported languages and their respective language codes are available here:
http://www.eztitles.com/download.php?file=sa-supported-langs.

The LogFile parameter is optional; when set, the log file keeps information about all operations performed. The path must be specified in UNC format.
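
Since each EZTitles instance closes automatically once its video has been analyzed, bulk processing can be scripted. The Windows batch file below is only a rough sketch: the EZTitles executable path, the video folder and the configuration file are assumptions and need to be adjusted to your setup.

@echo off
rem Bulk analysis sketch with hypothetical paths - adjust the executable, video folder and config file.
rem "start /wait" waits for each EZTitles instance to close (it closes automatically when the analysis
rem of its video finishes) before the next video is submitted.
set EZTITLES="C:\Program Files\EZTitles 6\EZTitles6.exe"
set CONFIG="D:\Work\For Analysis\Config.cfg"
for %%V in ("D:\Work\For Analysis\*.avi") do (
    start /wait "" %EZTITLES% -o="%%~fV" -AA=%CONFIG%
)

Several such scripts can also be run in parallel over different folders to keep multiple EZTitles instances busy at the same time, as long as no video file ends up being analyzed by more than one instance.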

The video can be analyzed in sections from the command line as well. In this case, the sections are defined by adding sequentially numbered sections to the configuration file, like this:

[AAConfigSection1]
Language=en-US
StartTC=StartOfVideo
EndTC=00:02:43:00

[AAConfigSection2]
Language=bg-BG
StartTC=00:02:43:01
EndTC=00:04:43:00

[AAConfigSection3]
Language=fr-FR
StartTC=00:04:43:01
EndTC=EndOfVideo

The language of each section is specified by the Language parameter. For reference, here's the list of supported languages and their codes:
http://www.eztitles.com/download.php?file=sa-supported-langs.

The StartTC parameter specifies the beginning of a section. You can enter either a valid timecode value or use StartOfVideo to indicate that the section starts from the very first frame of the video.

The EndTC parameter indicates where a section ends. You can enter a valid timecode value or use EndOfVideo to indicate that the section continues to the very last frame of the video.

Following the sample configuration above, the Subtitling Assistant will analyze the video in three sections:

1. From the video's beginning (i.e. from 00:00:00:00) to 00:02:43:00 - a section in English.

2. From 00:02:43:01 to 00:04:43:00 - a section in Bulgarian.

3. From 00:04:43:01 to the end of the video - a section in French.
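
For reference, a complete configuration file for such a sectioned analysis could look like the sketch below. The wallet ID and log path are placeholders, and keeping the global AutoAnalyzeRecLang value set alongside the sections is an assumption rather than a documented requirement:

[Analysis]
; Placeholder - enter a valid wallet ID with enough tokens
AutoAnalyzeWalletId=XXXXXXXX
; Assumed fallback language; each section below specifies its own language
AutoAnalyzeRecLang=en-US
; Optional log file, UNC path (placeholder)
LogFile=\\server\share\logs\analysis.log

[AAConfigSection1]
Language=en-US
StartTC=StartOfVideo
EndTC=00:02:43:00

[AAConfigSection2]
Language=bg-BG
StartTC=00:02:43:01
EndTC=00:04:43:00

[AAConfigSection3]
Language=fr-FR
StartTC=00:04:43:01
EndTC=EndOfVideo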