Face Detection and Sentiment Analysis with Xamarin and Azure

(Detectify demo video)

This is a little Xamarin app I call Detectify. It leverages the Face API from Azure Cognitive Services to detect one or more faces in an image and predict attributes such as emotion, gender, age, hair color and facial hair, as well as the presence of makeup or glasses. I'm going to walk you through how I built it, and you can find the completed code in my repository here.

I'm going to focus more on the Android side because, at the time of creating this project, I didn't have an iOS device and so couldn't fully optimize for iOS.

Step 1: Setting up the Azure FaceAPI

The Azure Face API gives you access to a model that has already been trained on hundreds of thousands of samples. You can find the resources you need on creating the Face API here. After setting it up, you will need the API key and endpoint for your application to connect to the service.

Step 2: Dependencies

Create a new Xamarin.Forms project with Android and iOS options. Once the project has been created, install the dependencies you will need for this application. Here's a list of all the dependencies I used; you can get them from the NuGet Package Manager in Visual Studio.

Detectify (the shared project): Microsoft.Azure.CognitiveServices.Vision.Face, SkiaSharp.Views.Forms, Xam.Plugin.Media and Acr.UserDialogs — these are the packages used throughout this walkthrough.

Detectify.Android: install the same plugin packages in the Android platform project as well.

Step 3: Splash/Launch Screen (Optional)

Once the dependencies have been installed, you can then create the splash screen. Setting up a splash screen is well documented here: docs.microsoft.com/en-us/xamarin/xamarin-fo..

Android

In the Resources folder of your Android project, navigate to the drawable folder, add the logo or image you want to display, and create a new XML file called splash_screen.xml.

<?xml version="1.0" encoding="utf-8" ?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
  <item>
    <color android:color="@color/splash_background"/>
  </item>
  <item>
    <bitmap android:src="@drawable/Detectify"
            android:tileMode="disabled"
            android:gravity="center"/>
  </item>
</layer-list>

Navigate to the colors.xml file and define the splash screen background by adding the following line of code to the resources tag.

<color name="splash_background">#FFFFFF</color>

Next, open the styles.xml file and add the following code to the <resources> tag. styles.xml can be found in the values folder within the resources folder.

<style name="MyTheme.Base" parent="Theme.AppCompat.Light">
  </style>
  <style name="MyTheme" parent="MyTheme.Base">
  </style>

  <style name="MyTheme.Splash" parent="Theme.AppCompat.Light.NoActionBar">
    <item name="android:windowBackground">@drawable/splash_screen</item>
    <item name="android:windowNoTitle">true</item>
    <item name="android:windowFullscreen">true</item>
    <item name="android:windowContentOverlay">@null</item>
    <item name="android:windowActionBar">true</item>
  </style>

Now, in the root folder of your Android project, create a new class called SplashActivity.cs and set it as the main launcher.

Delete MainLauncher = true from the [Activity] attribute in MainActivity.cs, since only one activity can be the launcher.
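For reference, here is a minimal sketch of what the attribute on MainActivity might look like after removing MainLauncher; the label, icon and configuration values are illustrative and will depend on your own project:

[Activity(Label = "Detectify", Icon = "@mipmap/icon", Theme = "@style/MyTheme",
          ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation)]
public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsAppCompatActivity
{
    // MainLauncher is intentionally absent; SplashActivity is now the launcher
    // and will start this activity once its startup work is done.
}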

using System.Threading.Tasks;
using Android.App;
using Android.Content;
using Android.OS;
using Android.Support.V7.App; // or AndroidX.AppCompat.App, depending on which AppCompat package your project uses
using Android.Util;

namespace Detectify.Droid
{
    [Activity(Theme = "@style/MyTheme.Splash", MainLauncher = true, NoHistory = true)]
    public class SplashActivity : AppCompatActivity
    {
        static readonly string TAG = "X:" + typeof(SplashActivity).Name;

        public override void OnCreate(Bundle savedInstanceState, PersistableBundle persistentState)
        {
            base.OnCreate(savedInstanceState, persistentState);
            Log.Debug(TAG, "SplashActivity.OnCreate");
        }

        protected override void OnResume()
        {
            base.OnResume();
            Task startupWork = new Task(() => { SimulateStartup(); });
            startupWork.Start();
        }

        public override void OnBackPressed(){ }

        async void SimulateStartup()
        {
            Log.Debug(TAG, "Performing some startup work that takes a bit of time.");
            await Task.Delay(3000);
            Log.Debug(TAG, "Startup work is finished - starting MainActivity.");
            StartActivity(new Intent(Application.Context, typeof(MainActivity)));
        }
    }
}

iOS

Use this step-by-step documentation on creating a launch screen: docs.microsoft.com/en-us/xamarin/xamarin-fo..

Let's Go!

I like to start my development with the user interface because it gives more insight into how the app should function. So here is the user interface we will be working with.

(MainPage.png: mockup of the main page user interface)

From this, you can see that we need a button to take the photo, a switch to toggle emoji mode on and off, and an icon to view the list of all the faces recognized by our application. In MainPage.xaml, clear the default code and replace it with this

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:d="http://xamarin.com/schemas/2014/forms/design"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:forms="clr-namespace:SkiaSharp.Views.Forms;assembly=SkiaSharp.Views.Forms"
             mc:Ignorable="d"
             x:Class="Detectify.MainPage"
             BackgroundColor="White">

    <NavigationPage.TitleView>
        <StackLayout Orientation="Horizontal" VerticalOptions="Start" Margin="0,0,10,0">
            <Button BackgroundColor="Black"
                    HorizontalOptions="Start"
                    VerticalOptions="Center"
                    Text="Take Photo"
                    TextColor="White"
                    FontSize="12"
                    CornerRadius="3"
                    HeightRequest="38"
                    Clicked="Button_Clicked"/>
            <StackLayout Orientation="Horizontal" VerticalOptions="Center" HorizontalOptions="EndAndExpand">
                <Label x:Name="Mode" 
                       Text="Emoji Mode"
                       TextColor="Black"
                       FontSize="14"
                       VerticalOptions="Center"
                       HorizontalOptions="EndAndExpand"
                       Margin="0"
                       Padding="0"/>
                <Switch VerticalOptions="Center" 
                        HorizontalOptions="EndAndExpand"
                        IsToggled="True"
                        Toggled="Switch_Toggled"
                        Margin="0"/>
                <ImageButton VerticalOptions="Center"
                             HorizontalOptions="EndAndExpand"
                             Clicked="Details_Page"
                             Source="List.png"
                             HeightRequest="20"
                             Margin="5,0,0,0"/>
            </StackLayout>
        </StackLayout>
    </NavigationPage.TitleView>

    <ContentPage.Content>
        <forms:SKCanvasView x:Name="Capture" PaintSurface="Capture_PaintSurface" Margin="20,0,20,20"/>
    </ContentPage.Content>
</ContentPage>

You'll notice that ContentPage.Content contains just one element that probably doesn't make much sense yet. That SKCanvasView uses SkiaSharp to display everything we need to show, but the main reason it is there is that we need it to draw our emoji on the screen.

SkiaSharp is a .NET cross-platform 2D graphics API based on Google's Skia Graphics Library (skia.org) that can be used across mobile, server and desktop platforms to render images.

Learn more about SkiaSharp here. Take note of x:Name="Capture" and PaintSurface="Capture_PaintSurface"; those are the two names we will need to trigger rendering on our screen.
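If SkiaSharp is new to you, the contract is simple: the view raises PaintSurface whenever it needs to be drawn, the handler draws onto the canvas it is given, and calling InvalidateSurface() on the view requests another pass. Here is a minimal sketch of a handler, assuming a using SkiaSharp; directive; the circle is just a placeholder, not the app's real drawing code:

void OnPaintSurface(object sender, SkiaSharp.Views.Forms.SKPaintSurfaceEventArgs e)
{
    SKCanvas canvas = e.Surface.Canvas;
    canvas.Clear(SKColors.White);   // wipe the surface before drawing

    var paint = new SKPaint { Color = SKColors.Green, IsAntialias = true };
    canvas.DrawCircle(e.Info.Width / 2f, e.Info.Height / 2f, 100, paint);
}

// Later, whenever the data changes, call Capture.InvalidateSurface() to redraw.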

Next, in your Xamarin.Forms shared project, create a new folder called Packages and create a class FaceAPI.cs inside it. Set the values for your API key and endpoint, and initialize the FaceClient.

Your API key and endpoint are under the Keys and Endpoint section of your Face resource. Log in to your Azure portal to get them if you don't have them already.

using Microsoft.Azure.CognitiveServices.Vision.Face;

namespace Detectify.Packages
{
    public class FaceAPI
    {
        private string APIKEY = "Your API Key goes here";
        private string ENDPOINT = "https://detectify.cognitiveservices.azure.com/";
        private FaceClient faceClient;

        public FaceAPI()
        {
            InitFaceClient();
        }

        void InitFaceClient()
        {
            ApiKeyServiceClientCredentials credentials = new ApiKeyServiceClientCredentials(APIKEY);
            faceClient = new FaceClient(credentials);
            faceClient.Endpoint = ENDPOINT;
            FaceOperations faceOperations = new FaceOperations(faceClient);
        }
    }
}

Now we are going to create a few methods in our MainPage.xaml.cs.

First, in Switch_Toggled in MainPage.xaml.cs, add the following code to toggle what is drawn on screen between an emoji placed over the face and a rectangle around the face.

private void Switch_Toggled(object sender, ToggledEventArgs e)
{
    var mode = sender as Switch;
    if (mode.IsToggled)
    {
        drawEmoji = true;
    }
    else
    {
        drawEmoji = false;
    }
}

Also in MainPage.xaml.cs, create a new method TakePicture() to capture the image. This makes use of the Xam.Plugin.Media package.

public async Task<MediaFile> TakePicture()
{
    image = null;
    MediaFile mediaFile = null;
    if (CrossMedia.Current.IsCameraAvailable && CrossMedia.Current.IsTakePhotoSupported)
    {
        mediaFile = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
        {
            PhotoSize = PhotoSize.Medium,
            RotateImage = true,
            DefaultCamera = Plugin.Media.Abstractions.CameraDevice.Front,
            Directory = "FaceAPI",
            Name = "face.jpg"
        });
    }
    else
    {
        await DisplayAlert("Camera not Found", ":(No Camera Available.", "OK");
    }
    return mediaFile;
}

While the picture is being analyzed, we want to keep the user busy so a loading or progress dialog will be very useful. Create a ShowProgressDialog() method in MainPage.xaml.cs.

private void ShowProgressDialog()
{
    UserDialogs.Instance.ShowLoading("Analyzing", MaskType.Black);
}

We also need the progress dialog to close once the image has been analyzed and is ready to be displayed, so create another method called HideProgressDialog(), still in MainPage.xaml.cs, like this

private void HideProgressDialog()
{
    UserDialogs.Instance.HideLoading();
}

Both HideProgressDialog() and ShowProgressDialog() are from the Acr.UserDialogs Package.
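One note on Acr.UserDialogs: on Android it generally has to be initialized with the current activity before ShowLoading/HideLoading will display anything. A minimal sketch of that call inside MainActivity.OnCreate in the Android project (the exact placement depends on your template and package version):

protected override void OnCreate(Bundle savedInstanceState)
{
    base.OnCreate(savedInstanceState);

    global::Xamarin.Forms.Forms.Init(this, savedInstanceState);
    Acr.UserDialogs.UserDialogs.Init(this);   // lets the dialogs attach to the current activity

    LoadApplication(new App());
}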

Now that we have created all these methods, we need one more method that calls them in the sequence in which they should occur: take the photo, show the loading dialog while the photo is sent to the Face API for analysis, hide the dialog once the response comes back, and then use that response to draw on the screen. From this you will notice that two steps are still missing: sending the photo to the API and drawing the results on the screen. The drawing is where SkiaSharp comes in, but first let us send that photo to the API.

Now, in the shared project, create a new folder Models, and inside it create a new model class file DetectedFaceExtended.cs to hold the detected faces, especially for cases where there is more than one face in the image.

using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

namespace Detectify.Models
{
    public class DetectedFaceExtended : DetectedFace
    {
        public string PredominantEmotion { get; set; }
    }
}

Next, we will create two methods in our FaceAPI.cs: the GetMultipleFaces() method, which sends the image to the API to be analyzed, and the FindDetectedEmotion() method, which finds the predominant emotion and returns it as a string value. Now our FaceAPI.cs will look like this

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Threading.Tasks;
using Detectify.Models;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
using Plugin.Media.Abstractions;

namespace Detectify.Packages
{
    public class FaceAPI
    {
        private string APIKEY = "Your API Key goes here";
        private string ENDPOINT = "https://detectify.cognitiveservices.azure.com/";
        private FaceClient faceClient;
        public static IEnumerable<DetectedFace> faceApiResponseList;
        public FaceAPI()
        {
            InitFaceClient();
        }

         public async Task<List<DetectedFaceExtended>> GetMultipleFaces(MediaFile image)
        {
            List<DetectedFaceExtended> multipleDetectedFaces = null;
            faceApiResponseList = await faceClient.Face.DetectWithStreamAsync(image.GetStreamWithImageRotatedForExternalStorage(), true, true, Enum.GetValues(typeof(FaceAttributeType)).OfType<FaceAttributeType>().ToList());
            DetectedFaceExtended detdFace = null;

            if (faceApiResponseList.Any())
            {
                multipleDetectedFaces = new List<DetectedFaceExtended>();

                foreach (DetectedFace detectedFace in faceApiResponseList)
                {
                    detdFace = new DetectedFaceExtended
                    {
                        FaceRectangle = detectedFace.FaceRectangle,
                    };
                    detdFace.PredominantEmotion = FindDetectedEmotion(detectedFace.FaceAttributes.Emotion);

                    multipleDetectedFaces.Add(detdFace);
                }
            }
            return multipleDetectedFaces;
        }

        private string FindDetectedEmotion(Emotion emotion)
        {
            double max = 0;
            PropertyInfo info = null;

            var valueOfEmotions = typeof(Emotion).GetProperties();
            foreach(PropertyInfo propertyInfo in valueOfEmotions)
            {
                var value = (double)propertyInfo.GetValue(emotion);

                if(value > max)
                {
                    max = value;
                    info = propertyInfo;
                }
            }
            return info.Name.ToString();
        }

        void InitFaceClient()
        {
            ApiKeyServiceClientCredentials credentials = new ApiKeyServiceClientCredentials(APIKEY);
            faceClient = new FaceClient(credentials);
            faceClient.Endpoint = ENDPOINT;
            FaceOperations faceOperations = new FaceOperations(faceClient);
        }
    }
}
faceApiResponseList = await faceClient.Face.DetectWithStreamAsync(image.GetStreamWithImageRotatedForExternalStorage(), true, true, Enum.GetValues(typeof(FaceAttributeType)).OfType<FaceAttributeType>().ToList());

This line passes the image to the FaceClient, which analyzes it and sends back information on the recognized faces.

We declare faceApiResponseList as a public static field so that it is accessible throughout the project; we will need it later.

Because the emotion is the attribute we need first and always when running the application, we handle it immediately. Given the design of the application, the user may never open the face list to view the other attributes, but the emotion is always required to draw on the photo, so we extract it at once.

The FindDetectedEmotion method picks the emotion with the highest score out of all the emotions returned, which is the predominant emotion. Emotions are returned in a JSON response like this

"emotion": {
         "anger": 0.0,
         "contempt": 0.0,
         "disgust": 0.0,
         "fear": 0.0,
         "happiness": 1.0,
         "neutral": 0.0,
         "sadness": 0.0,
         "surprise": 0.0
}

Now let us handle the SkiaSharp side. In the Packages folder we created earlier, the one that contains our FaceAPI.cs, create a new class, call it SkiaSharpDrawingPackage.cs, and add the following code to the file

using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
using SkiaSharp;
using System;
using System.IO;
using System.Reflection;
using System.Text;

namespace Detectify.Packages
{
    public class SkiaSharpDrawingPackage
    {
        public void ClearCanvas(SKImageInfo info, SKCanvas canvas)
        {
            var paint = new SKPaint
            {
                Style = SKPaintStyle.Fill,
                Color = SKColors.White
            };
            canvas.DrawRect(info.Rect, paint);
        }
        public void DrawPrediction(SKCanvas canvas, FaceRectangle rectangle, float left,float top,float scale,string emotion, bool showEmoji)
        {
            var scaledRectangleLeft = left + (scale * rectangle.Left);
            var scaledRectangleWidth = scale * rectangle.Width;
            var scaledRectangleTop = top + (scale * rectangle.Top);
            var scaledRectangleHeight = scale * rectangle.Height;

            if (showEmoji)
            {
                SKBitmap image = GetEmojiBitmap(emotion);
                canvas.DrawBitmap(image, new SKRect(scaledRectangleLeft, scaledRectangleTop, scaledRectangleLeft + scaledRectangleWidth, scaledRectangleTop + scaledRectangleHeight));
            }
            else
            {
                DrawRectangle(canvas, scaledRectangleLeft, scaledRectangleTop, scaledRectangleWidth, scaledRectangleHeight);
                DrawText(canvas, emotion, scaledRectangleLeft, scaledRectangleTop, scaledRectangleWidth, scaledRectangleHeight);
            }
        }
        public void DrawEmoticon(SKImageInfo info, SKCanvas canvas, string emotion)
        {
            SKBitmap image = GetEmojiBitmap(emotion);
            var scale = Math.Min(info.Width / (float)image.Width, info.Height / (float)image.Height);

            var scaleHeight = scale * image.Height;
            var scaleWidth = scale * image.Width;

            var top = (info.Height - scaleHeight) / 2;
            var left = (info.Width - scaleWidth) / 2;

            canvas.DrawBitmap(image, new SKRect(left, top, left + scaleWidth, top + scaleHeight));
        }
        private SKBitmap GetEmojiBitmap(string emotion)
        {
            string resourceID = GetImageResourceID(emotion).ToString();
            Assembly assembly = GetType().GetTypeInfo().Assembly;
            SKBitmap resourceBitmap = null;
            using (Stream stream = assembly.GetManifestResourceStream(resourceID))
            {
                resourceBitmap = SKBitmap.Decode(stream);
            }
            return resourceBitmap;
        }

        private StringBuilder GetImageResourceID(string emotion)
        {
            StringBuilder resID = new StringBuilder("Detectify.Emojis.");
            switch (emotion)
            {
                case "Anger" : resID.Append("Anger");
                    break;
                case "Contempt" : resID.Append("Dislike");
                    break;
                case "Fear": resID.Append("Fear");
                    break;
                case "Disgust": resID.Append("Disgust");
                    break;
                case "Happiness": resID.Append("Happy");
                    break;
                case "Neutral": resID.Append("Neutral");
                    break;
                case "Sadness": resID.Append("Sad");
                    break;
                case "Surprise": resID.Append("Surprise");
                    break;
            }
            resID.Append(".png");
            return resID;
        }
        private SKPath CreateRectanglePath (float startLeft, float startTop, float scaledRectangleWidth, float scaledRectangleHeight)
        {
            var path = new SKPath();
            path.MoveTo(startLeft, startTop);

            path.LineTo(startLeft + scaledRectangleWidth, startTop);
            path.LineTo(startLeft + scaledRectangleWidth, startTop + scaledRectangleHeight);
            path.LineTo(startLeft, startTop + scaledRectangleHeight);
            path.LineTo(startLeft, startTop);

            return path;
        }
        private void DrawRectangle(SKCanvas canvas, SKPaint paint, float startLeft, float startTop, float scaledRectangleWidth, float scaledRectangleHeight)
        {
            var path = CreateRectanglePath(startLeft, startTop, scaledRectangleWidth, scaledRectangleHeight);
            canvas.DrawPath(path, paint);
        }
        private void DrawRectangle(SKCanvas canvas, float startLeft, float startTop, float scaledRectangleWidth, float scaledRectangleHeight)
        {
            var strokePaint = new SKPaint
            {
                IsAntialias = true,
                Style = SKPaintStyle.Stroke,
                Color = SKColors.Green,
                StrokeWidth = 5,
                //PathEffect = SKPathEffect.CreateDash(new[] { 20f, 20f }, 20f)
            };
            DrawRectangle(canvas, strokePaint, startLeft, startTop, scaledRectangleWidth, scaledRectangleHeight);

            var blurStrokePaint = new SKPaint
            {
                Color = SKColors.Green,
                Style = SKPaintStyle.Stroke,
                StrokeWidth = 5,
                PathEffect = SKPathEffect.CreateDash(new[] { 20f, 20f }, 20f),
                IsAntialias = true,
                MaskFilter = SKMaskFilter.CreateBlur(SKBlurStyle.Normal, 0.57735f * 1.0f + 0.5f)
            };
            DrawRectangle(canvas, blurStrokePaint, startLeft, startTop, scaledRectangleWidth, scaledRectangleHeight);
        }
        private void DrawText(SKCanvas canvas, string tag, float startLeft, float startTop, float scaledRectangleWidth, float scaledRectangleHeight)
        {
            var textPaint = new SKPaint
            {
                IsAntialias = true,
                Color = SKColors.White,
                Style = SKPaintStyle.Fill,
                Typeface = SKTypeface.FromFamilyName("Montserrat")
            };
            var text = tag;

            var textWidth = textPaint.MeasureText(text);
            textPaint.TextSize = 0.4f * scaledRectangleWidth * textPaint.TextSize / textWidth;

            var textBounds = new SKRect();
            textPaint.MeasureText(text, ref textBounds);

            var xText = startLeft + 10;
            var yText = startTop + (scaledRectangleHeight - 25);

            var paint = new SKPaint
            {
                Style = SKPaintStyle.Fill,
                Color = new SKColor(0, 0, 0, 120)
            };

            var backgroundRect = textBounds;
            backgroundRect.Offset(xText, yText);
            backgroundRect.Inflate(10, 10);

            canvas.DrawRoundRect(backgroundRect, 5, 5, paint);
            canvas.DrawText(text, xText, yText, textPaint);
        }
    }
}

Now let's get back to our MainPage.xaml.cs file. In there, create two new methods like this

private void SetImageInImageView(MediaFile mediaImage)
{
    image = SKBitmap.Decode(mediaImage.GetStreamWithImageRotatedForExternalStorage());
    Capture.InvalidateSurface();
}

This method displays the image on the screen.

private void Capture_PaintSurface(object sender, SkiaSharp.Views.Forms.SKPaintSurfaceEventArgs e)
{
    var info = e.Info;
    var canvas = e.Surface.Canvas;
    drawingPackage.ClearCanvas(info, canvas);
    if (image != null)
    {
        var scale = Math.Min(info.Width / (float)image.Width, info.Height / (float)image.Height);
        var scaleHeight = scale * image.Height;
        var scaleWidth = scale * image.Width;
        var top = (info.Height - scaleHeight) / 2;
        var left = (info.Width - scaleWidth) / 2;

        canvas.DrawBitmap(image, new SKRect(left, top, left + scaleWidth, top + scaleHeight));

        if (multipleFaces.Value.Count > 0)
        {
            foreach (var face in multipleFaces.Value)
            {
                drawingPackage.DrawPrediction(canvas, face.FaceRectangle, left, top, scale, face.PredominantEmotion, drawEmoji);
            }
        }
    }
}

This method draws the corresponding emoji or rectangle with the text stating the emotion on all the detected faces.

Now we create the method that calls all of these in sequence on the click of one single button, the TAKE PHOTO button. Create a new method TakePictureAndAnalizeImage() in MainPage.xaml.cs and add the following code

public async void TakePictureAndAnalizeImage()
{
    capturedImage = await TakePicture();
    if (multipleFaces.Value.Count > 0)
    {
        multipleFaces.Value.Clear();
    }
    if (capturedImage != null)
    {
        ShowProgressDialog();
        SetImageInImageView(capturedImage);
        try
        {
            var foundFaces = await faceAPI.GetMultipleFaces(capturedImage);
            if (foundFaces != null && foundFaces.Count > 0)
            {
                multipleFaces.Value.AddRange(foundFaces);
                Capture.InvalidateSurface();
            }
            else
            {
                UserDialogs.Instance.Toast("No Face Found");
            }
            HideProgressDialog();
        }
        catch (Exception e)
        {
            HideProgressDialog();
            UserDialogs.Instance.Toast("No Face Found");
        }
    }
}

The Capture.InvalidateSurface() call tells the SkiaSharp canvas that it needs to redraw itself.

Now we just need to call TakePictureAndAnalizeImage() on the click of our TAKE PHOTO button, which we do with this little piece of code

private void Button_Clicked(object sender, EventArgs e)
{
    TakePictureAndAnalizeImage();
}

The last things remaining in our MainPage.xaml.cs are to navigate when the list icon is clicked and to initialize our fields. To handle navigation, add this Details_Page handler to MainPage.xaml.cs

private void Details_Page(object sender, EventArgs e)
{
    Navigation.PushAsync(new FaceList { BindingContext = new FacesViewModel(capturedImage, FaceAPI.faceApiResponseList) });
}

And we initialize everything by also adding this to MainPage.xaml.cs

public static MediaFile capturedImage;

public MainPage()
{
    InitializeComponent();
    Init();
}

private async void Init()
{
    faceAPI = new FaceAPI();
    drawingPackage = new SkiaSharpDrawingPackage();
    await CrossMedia.Current.Initialize();
    TakePictureAndAnalizeImage();
}

So in general, our MainPage.xaml.cs should look like this

using Acr.UserDialogs;
using Detectify.Models;
using Detectify.Packages;
using Detectify.ViewModels;
using Plugin.Media;
using Plugin.Media.Abstractions;
using SkiaSharp;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Threading.Tasks;
using Xamarin.Forms;

namespace Detectify
{
    [DesignTimeVisible(false)]
    public partial class MainPage : ContentPage
    {
        public static Lazy<List<DetectedFaceExtended>> multipleFaces = new Lazy<List<DetectedFaceExtended>>();
        private FaceAPI faceAPI;
        private SKBitmap image;
        private SkiaSharpDrawingPackage drawingPackage;
        private bool drawEmoji = true;

        public static MediaFile capturedImage;
        public MainPage()
        {
            InitializeComponent();
            Init();
        }

        private async void Init()
        {
            faceAPI = new FaceAPI();
            drawingPackage = new SkiaSharpDrawingPackage();
            await CrossMedia.Current.Initialize();
            TakePictureAndAnalizeImage();
        }

        private void Switch_Toggled(object sender, ToggledEventArgs e)
        {
            var mode = sender as Switch;
            if (mode.IsToggled)
            {
                drawEmoji = true;
            }
            else
            {
                drawEmoji = false;
            }
        }

        private void Capture_PaintSurface(object sender, SkiaSharp.Views.Forms.SKPaintSurfaceEventArgs e)
        {
            var info = e.Info;
            var canvas = e.Surface.Canvas;
            drawingPackage.ClearCanvas(info, canvas);
            if(image != null)
            {
                var scale = Math.Min(info.Width / (float)image.Width, info.Height / (float)image.Height);
                var scaleHeight = scale * image.Height;
                var scaleWidth = scale * image.Width;
                var top = (info.Height - scaleHeight) / 2;
                var left = (info.Width - scaleWidth) / 2;

                canvas.DrawBitmap(image, new SKRect(left, top, left + scaleWidth, top + scaleHeight));

                if(multipleFaces.Value.Count > 0)
                {
                    foreach(var face in multipleFaces.Value)
                    {
                        drawingPackage.DrawPrediction(canvas, face.FaceRectangle, left, top, scale, face.PredominantEmotion, drawEmoji);
                    }
                }
            }
        }
        private void HideProgressDialog()
        {
            UserDialogs.Instance.HideLoading();
        }

        private void Button_Clicked(object sender, EventArgs e)
        {
            TakePictureAndAnalizeImage();
        }
        private void SetImageInImageView(MediaFile mediaImage)
        {
            image = SKBitmap.Decode(mediaImage.GetStreamWithImageRotatedForExternalStorage());
            Capture.InvalidateSurface();
        }
        private void ShowProgressDialog()
        {
            UserDialogs.Instance.ShowLoading("Analyzing", MaskType.Black);
        }
        public async Task<MediaFile> TakePicture()
        {
            image = null;
            MediaFile mediaFile = null;
            if(CrossMedia.Current.IsCameraAvailable && CrossMedia.Current.IsTakePhotoSupported)
            {
                mediaFile = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
                {
                    PhotoSize = PhotoSize.Medium,
                    RotateImage = true,
                    DefaultCamera = Plugin.Media.Abstractions.CameraDevice.Front,
                    Directory = "FaceAPI",
                    Name = "face.jpg"
                });
            }
            else
            {
                await DisplayAlert("Camera not Found", ":(No Camera Available.", "OK");
            }
            return mediaFile;
        }
        public async void TakePictureAndAnalizeImage()
        {
            capturedImage = await TakePicture();
            if(multipleFaces.Value.Count > 0)
            {
                multipleFaces.Value.Clear();
            }
            if(capturedImage != null)
            {
                ShowProgressDialog();
                SetImageInImageView(capturedImage);
                try
                {
                    var foundFaces = await faceAPI.GetMultipleFaces(capturedImage);
                    if(foundFaces != null && foundFaces.Count > 0)
                    {
                        multipleFaces.Value.AddRange(foundFaces);
                        Capture.InvalidateSurface();
                    }
                    else
                    {
                        UserDialogs.Instance.Toast("No Face Found");
                    }
                    HideProgressDialog();
                }
                catch(Exception e)
                {
                    HideProgressDialog();
                    UserDialogs.Instance.Toast("No Face Found");
                }
            }
        }
        private void Details_Page(object sender, EventArgs e)
        {
            Navigation.PushAsync(new FaceList { BindingContext = new FacesViewModel(capturedImage,FaceAPI.faceApiResponseList)});
        }
    }
}

You are probably getting some errors right about now. Don't worry about them, they'll be gone soon enough.

Now, from the demo we know that the list icon takes us to a page that displays a list of all the faces recognized, so we need to create that. We will use data binding and a ListView so that the number of items shown matches the number of faces recognized. Remember also that each ListView item should be tappable to show the details of that particular face. Create a new content page in our shared project, call it FaceList.xaml, and replace its code with this

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:d="http://xamarin.com/schemas/2014/forms/design"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             mc:Ignorable="d"
             x:Class="Detectify.FaceList"
             Title="Recognized Faces"
             NavigationPage.IconColor="Black">
    <ContentPage.Content>
        <StackLayout>
            <ListView x:Name="Faces" ItemsSource="{Binding Faces}"
                  SelectedItem="{Binding SelectedFace}"
                  HasUnevenRows="True"
                  RowHeight="60"
                  ItemTapped="ListView_ItemTapped">
                <ListView.ItemTemplate>
                    <DataTemplate>
                        <ViewCell>
                            <StackLayout Orientation="Horizontal" Spacing="20" Margin="20,0,20,0">
                                <Image Source="{Binding Photo}" Aspect="AspectFit" HeightRequest="60"/>
                                <Label VerticalOptions="Center" Text="{Binding Description}" TextColor="Black"/>
                            </StackLayout>
                        </ViewCell>
                    </DataTemplate>
                </ListView.ItemTemplate>
            </ListView>
        </StackLayout>
    </ContentPage.Content>
</ContentPage>

Then the FaceList.xaml.cs should look like this

using Detectify.ViewModels;
using Xamarin.Forms;
using Xamarin.Forms.Xaml;

namespace Detectify
{
    [XamlCompilation(XamlCompilationOptions.Compile)]
    public partial class FaceList : ContentPage
    {
        public FaceList()
        {
            InitializeComponent();

            Faces.ItemTapped += (object sender, ItemTappedEventArgs e) =>
            {
                if (e.Item == null) return;
                if (sender is ListView lv) lv.SelectedItem = null;
            };
        }

        private void ListView_ItemTapped(object sender, ItemTappedEventArgs e)
        {
            Navigation.PushAsync(new FaceDetails { BindingContext = ((FacesViewModel)BindingContext).SelectedFace});
        }

    }
}

Now, we are going to use some ViewModels to achieve some of these functions, so create a ViewModels folder in our shared project. In the ViewModels folder, create a new class, call it ViewModelBase.cs, and fill it with the following code

using System.ComponentModel;
using System.Runtime.CompilerServices;

namespace Detectify.ViewModels
{
    public class ViewModelBase : INotifyPropertyChanged
    {
        protected bool Set<T>(ref T field, T value, [CallerMemberName] string propertyName = null)
        {
            if (Equals(field, value)) return false;

            field = value;
            RaisePropertyChanged(propertyName);

            return true;
        }

        protected void RaisePropertyChanged([CallerMemberName] string propertyName = null)
        {
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }
}

This base class implements INotifyPropertyChanged, which is how the UI gets notified that a property value has changed.
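For illustration, a bindable property in any class that inherits from ViewModelBase would use the Set helper like this (the Status property below is hypothetical and not part of the app):

private string _status;
public string Status
{
    get => _status;
    set => Set(ref _status, value);   // updates the field and raises PropertyChanged only when the value actually changes
}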

Next, we are going to create another ViewModel called FaceViewModel.cs. This model is where we initialize and format all the other facial attributes for each face detected in the photo. It also inherits from the ViewModelBase class. Our code should look like this

using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
using Plugin.Media.Abstractions;
using System;
using System.Collections.Generic;
using System.Text;
using Xamarin.Forms;
using System.Linq;

namespace Detectify.ViewModels
{
    public class FaceViewModel : ViewModelBase
    {
        public StreamImageSource Photo { get; }
        public string Description { get; }
        public string Details { get; }

        public FaceViewModel (MediaFile photo, DetectedFace detectedFace)
        {
            Photo = (StreamImageSource)ImageSource.FromStream(() => photo.GetStreamWithImageRotatedForExternalStorage());

            var builder = new StringBuilder();
            builder.AppendLine($"Age: {detectedFace.FaceAttributes.Age} years old");
            builder.AppendLine($"Gender: {detectedFace.FaceAttributes.Gender}");
            builder.AppendLine($"Hair: {GetHair(detectedFace)}");
            builder.AppendLine($"Facial Hair: { GetFacialHair(detectedFace) }");
            builder.AppendLine($"Glasses: {detectedFace.FaceAttributes.Glasses}");
            builder.AppendLine($"Makeup: {GetMakeup(detectedFace)}");
            builder.AppendLine($"Emotion: {GetEmotion(detectedFace)}");

            Details = builder.ToString();
            Description = $"{detectedFace.FaceAttributes.Age} year old {detectedFace.FaceAttributes.Gender}";
        }

        private static string GetMakeup(DetectedFace detectedFace)
        {
            var makeup = (new[]
            {
                detectedFace.FaceAttributes.Makeup.EyeMakeup ? "Eyes" : "",
                detectedFace.FaceAttributes.Makeup.LipMakeup ? "Lips" : "",
            }).Where(m => !string.IsNullOrEmpty(m));

            var makeups = string.Join(", ", makeup);
            return string.IsNullOrEmpty(makeups) ? "None" : makeups;
        }
        private string GetHair(DetectedFace detectedFace)
        {
            if (detectedFace.FaceAttributes.Hair.Invisible)
                return "Hidden";
            if (detectedFace.FaceAttributes.Hair.Bald > 0.75)
                return "Bald";
            var hairColor = detectedFace.FaceAttributes.Hair.HairColor.OrderByDescending(h => h.Confidence).FirstOrDefault();
            if (hairColor == null)
                return "Unknown";
            return $"{hairColor.Color}";
        }
        private string GetFacialHair(DetectedFace detectedFace)
        {
            if (detectedFace.FaceAttributes.FacialHair.Beard < 0.1 &&
                detectedFace.FaceAttributes.FacialHair.Moustache < 0.1 &&
                detectedFace.FaceAttributes.FacialHair.Sideburns < 0.1)
                return "None";
            return $"Beard ({ detectedFace.FaceAttributes.FacialHair.Beard}), " +
                $"Moustache ({ detectedFace.FaceAttributes.FacialHair.Moustache}), " +
                $"Sideburns ({ detectedFace.FaceAttributes.FacialHair.Sideburns})";
        }
        private string GetEmotion(DetectedFace detectedFace)
        {
            var emotion = new Dictionary<String, double>
            {
                {nameof(detectedFace.FaceAttributes.Emotion.Anger), detectedFace.FaceAttributes.Emotion.Anger },
                {nameof(detectedFace.FaceAttributes.Emotion.Contempt), detectedFace.FaceAttributes.Emotion.Contempt },
                {nameof(detectedFace.FaceAttributes.Emotion.Disgust), detectedFace.FaceAttributes.Emotion.Disgust },
                {nameof(detectedFace.FaceAttributes.Emotion.Fear), detectedFace.FaceAttributes.Emotion.Fear },
                {nameof(detectedFace.FaceAttributes.Emotion.Happiness), detectedFace.FaceAttributes.Emotion.Happiness },
                {nameof(detectedFace.FaceAttributes.Emotion.Neutral), detectedFace.FaceAttributes.Emotion.Neutral },
                {nameof(detectedFace.FaceAttributes.Emotion.Sadness), detectedFace.FaceAttributes.Emotion.Sadness },
                {nameof(detectedFace.FaceAttributes.Emotion.Surprise), detectedFace.FaceAttributes.Emotion.Surprise },
            };
            return emotion.OrderByDescending(e => e.Value).First().Key;
        }
    }
}

Next, we are going to create a FacesViewModel.cs, still in our ViewModels folder. What this model does is create a FaceViewModel for every face detected in our photo. The following code should do the job

using System.Collections.Generic;
using Plugin.Media.Abstractions;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
using System.Linq;
using System;
using Acr.UserDialogs;

namespace Detectify.ViewModels
{
    public class FacesViewModel : ViewModelBase
    {

        public FacesViewModel(MediaFile photo, IEnumerable<DetectedFace> detectedFaces)
        {
            try
            {
                Faces = detectedFaces.Select(f => new FaceViewModel(photo, f));
                SelectedFace = Faces.First();
            }
            catch (Exception e)
            {
                UserDialogs.Instance.Toast("No Face Found");
            }
        }
        public IEnumerable<FaceViewModel> Faces { get; }
        FaceViewModel _selectedFace;
        public FaceViewModel SelectedFace
        {
            get => _selectedFace;
            set => Set(ref _selectedFace, value);
        }
    }
}

Finally, we are going to create a new ContentPage in our shared project called FaceDetails.xaml. This is going to display the full details of the face that is clicked from the list provided by FaceList.xaml. The FaceDetails.xaml should look like this

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:d="http://xamarin.com/schemas/2014/forms/design"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             mc:Ignorable="d"
             x:Class="Detectify.FaceDetails"
             NavigationPage.IconColor="Black">
    <ContentPage.Content>
        <Grid Margin="30,0,30,0">
            <Grid.RowDefinitions>
                <RowDefinition Height="*"/>
                <RowDefinition Height="Auto"/>
            </Grid.RowDefinitions>
            <Image Grid.Row="0" Source="{Binding Photo}" Aspect="AspectFill"/>
            <ScrollView Grid.Row="1">
                <Label Text="{Binding Details}" FontFamily="{StaticResource BoldFont}" LineBreakMode="WordWrap" Margin="5"/>
            </ScrollView>
        </Grid>
    </ContentPage.Content>
</ContentPage>

And the content of FaceDetails.xaml.cs should not change as the ViewModels are already handling the work. It should just remain like this

using Xamarin.Forms;
using Xamarin.Forms.Xaml;

namespace Detectify
{
    [XamlCompilation(XamlCompilationOptions.Compile)]
    public partial class FaceDetails : ContentPage
    {
        public FaceDetails()
        {
            InitializeComponent();
        }
    }
}

If you followed this carefully, there should be no errors in your solution and your app should run correctly on your Android device. For an iOS device, you might just need a few tweaks and your application should be ready to go.
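One of those iOS tweaks is worth spelling out: because the app takes photos, iOS will not grant camera or photo library access unless Info.plist contains usage descriptions. The keys below are the ones the media plugin typically requires; the description strings are only examples, so word them for your own app:

<key>NSCameraUsageDescription</key>
<string>Detectify uses the camera to take the photo it analyzes.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Detectify reads captured photos from your library.</string>
<key>NSPhotoLibraryAddUsageDescription</key>
<string>Detectify saves the captured photo to your library.</string>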

Some other resources I found helpful were these posts from Jim and Aritra:

medium.com/inspiredbrilliance/face-sentimen..

jimbobbennett.io/face-identification-with-a..

Feel free to reach out to me via my socials for any .NET-related questions. I'm also interested in building products, so if you have an idea and would love to collaborate, reach out. I am mostly active on Twitter.

Cheers.