

Building .NET Core sound application - part 3

This is the third and final part of the tutorial on building a platform-independent audio app on .NET Core. In the first part, we set up the general project structure and enabled audio playback on Windows. In the second part, we added the ability to play audio on Linux and made the library pick the appropriate implementation based on the operating system it is running on. Today, we will enable audio capabilities on Mac.

As mentioned before, .NET Core is a great platform-independent technology to build software with. However, due to that very platform-independent nature, it lacks some of the most basic capabilities, namely those whose implementations differ too much between operating systems. One of these is the ability to natively play audio.

Although there are reliable ways of enabling audio playback on .NET Core, they require a large number of third-party dependencies.

The goal of this three-part tutorial is to build our own library that will enable us to use basic playback capabilities without any additional third-party dependencies whatsoever.


Adding Mac implementation of IPlayer interface

If you have been following the previous parts of this tutorial, you will remember that we have been using the following interface that all of our OS-specific player classes implement:

using System.Threading.Tasks;

namespace NetCoreAudio.Interfaces
{
    public interface IPlayer
    {
        Task Play(string fileName);
        Task Pause();
        Task Resume();
        Task Stop();
    }
}

We already have classes called WindowsPlayer and LinuxPlayer that implement this interface. For this exercise, we will add a new class called MacPlayer.
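A minimal skeleton for the new class might look like the following. The namespace and the field name are illustrative guesses based on the structure from the earlier parts, so the actual repository may differ slightly:

using System.Diagnostics;
using System.Threading.Tasks;
using NetCoreAudio.Interfaces;

namespace NetCoreAudio.Players
{
    public class MacPlayer : IPlayer
    {
        // Keeps a handle to the process that runs afplay (see below).
        private Process _process;

        public Task Play(string fileName) { /* covered below */ return Task.CompletedTask; }
        public Task Pause() { /* covered below */ return Task.CompletedTask; }
        public Task Resume() { /* covered below */ return Task.CompletedTask; }
        public Task Stop() { /* covered below */ return Task.CompletedTask; }
    }
}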

Once it is created, we will modify the logic that selects the correct implementation of IPlayer based on the operating system the application is running on:

// Requires: using System.Runtime.InteropServices;
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
  _internalPlayer = new WindowsPlayer();
else if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
  _internalPlayer = new LinuxPlayer();
else if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
  _internalPlayer = new MacPlayer();
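With all three branches in place, the consuming code stays completely platform-agnostic. Assuming the facade class from the earlier parts of this tutorial is called Player, usage would look something like this:

var player = new Player();       // internally selects the right IPlayer
await player.Play("audio.mp3");  // the same call works on Windows, Linux and Mac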


Running Bash commands on Mac

The principles of playing audio on Mac are similar to how it's done on Linux, as described in part two. Both operating systems are Unix-based, so many of their internal components are similar.

Unlike Linux, macOS doesn't use the ALSA architecture, so it doesn't come with aplay. However, it ships with a command-line utility of its own, known as afplay.

The basic syntax of afplay is very similar to that of aplay. Assuming we have an audio file called "audio.mp3", we can run the following command to play it:

afplay audio.mp3

Just like we did with the Linux implementation, we can launch bash from the Process class and use this command to start the playback. However, this is where the similarities between the two operating systems end.
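As a rough sketch, filling in the Play method from the skeleton above could look like this (mirroring the Linux approach from part two; the exact details in the repository may differ):

public Task Play(string fileName)
{
    _process = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "/bin/bash",
            Arguments = $"-c \"afplay '{fileName}'\"",
            UseShellExecute = false,
            CreateNoWindow = true
        }
    };
    _process.Start();
    return Task.CompletedTask;
}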


Introducing command piping

Unlike aplay, the afplay utility doesn't come with an interactive mode where the playback can be paused or resumed from standard input. Instead, pausing and resuming is done by sending signals to the running afplay process with the "kill" command. The following command pauses the playback (signal 17 is SIGSTOP on macOS):

kill -17 <process id>

The following command resumes the playback (signal 19 is SIGCONT on macOS):

kill -19 <process id>

One issue with these commands is that we need to know the ID of the running afplay process, which is not available to our calling code. This is where Unix command piping comes in.

The Unix shell allows you to chain commands with the pipe ("|") symbol. When you do so, instead of printing to the console, the command on the left of the pipe sends its standard output to the standard input of the command on the right.

So, we can pipe several commands together to find the running afplay process, extract its process ID and pass it to the relevant kill command. Because kill takes the process ID as an argument rather than reading it from standard input, the last step goes through xargs. Below is an example of how to pause the playback; for resuming, just replace "-17" with "-19".

ps -A | grep -m1 'afplay' | awk '{print $1}' | xargs kill -17

Here is the breakdown of the command.

First, "ps" command lists running processes. "-A" flag tells it to list them all. Normally, the output would be displayed in the console in a tabular format. However, in this case, it goes directly into the next command.

"grep -m1 'afplay'" command looks for the first line in the output where the word 'afplay' is present. This line contains all of the process attributes, including its id. The line is then sent to the next command.

"awk '{print $1}'" extracts the first field from the line, which is the process id we are looking for.

Finally, "xargs" turns the piped process ID into a command-line argument for the "kill" command we covered above. This step is necessary because "kill" does not read from standard input.

Executing this pipeline from the code is done in the same way as executing the play process. However, the pipeline needs to run in a separate Process object, so that it doesn't interfere with the one playing the audio.
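As a sketch under the same assumptions as above (the ExecuteBashCommand helper is a hypothetical name; the repository may structure this differently), Pause and Resume could look like this:

private Task ExecuteBashCommand(string command)
{
    // The pipeline runs in its own short-lived Process object,
    // separate from the one that is playing the audio.
    var process = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "/bin/bash",
            Arguments = $"-c \"{command}\"",
            UseShellExecute = false,
            CreateNoWindow = true
        }
    };
    process.Start();
    process.WaitForExit();
    return Task.CompletedTask;
}

public Task Pause() =>
    ExecuteBashCommand("ps -A | grep -m1 'afplay' | awk '{print $1}' | xargs kill -17");

public Task Resume() =>
    ExecuteBashCommand("ps -A | grep -m1 'afplay' | awk '{print $1}' | xargs kill -19");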


Wrapping up

This was the final part of a three-part tutorial on how to play audio on .NET Core. The first part is available here. The second part is available here.

What we didn't do in this tutorial is delve into a detailed line-by-line implementation. This is deliberate: it gives you an opportunity to figure things out for yourself, while having just enough information to be able to do so.

If you want to see how these principles are implemented in practice, you can check out the NetCoreAudio repository on GitHub. It is also published on the NuGet Gallery, so you can use it in your own .NET Core projects.



Written by Fiodar Sazanavets

Posted on 3 Sep 2018

Fiodar Sazanavets is a full stack software developer with several years of experience working in various industries. In his professional career, he has mainly worked with a number of different Microsoft stack technologies, both back-end and front-end, and Java for Android. Fiodar has an Honours degree in Environmental Biology and a Masters degree in Environmental Informatics.


