Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.

Reduce dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the main features of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to perform transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR, allowing developers to build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, visit the official AssemblyAI blog.

Image source: Shutterstock.
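A note on error handling for the transcription examples above: EnsureStatusCompleted throws if a transcript did not finish successfully. As a sketch, a failed job can also be inspected explicitly instead; the TranscriptStatus enum and the transcript's Error property are assumed here and may differ across SDK versions:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

// Check the status directly rather than calling EnsureStatusCompleted(),
// so a failed transcription can be reported instead of thrown.
if (transcript.Status == TranscriptStatus.Error)
{
    Console.WriteLine($"Transcription failed: {transcript.Error}");
}
else
{
    Console.WriteLine(transcript.Text);
}
```

This pattern is useful in long-running services, where a single failed file should be logged and skipped rather than allowed to propagate as an exception.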