Artificial Intelligence: Getting Started with the Face API in C# Tutorial

Today we are going to describe how to use Cognitive Services for face recognition. We will create a simple WPF application. The application detects faces in an image, draws a frame around each face and displays a description of the face on the status bar.

What are Cognitive Services?

“Microsoft Cognitive Services (also known as Project Oxford) is a set of Application Programming Interfaces (APIs), Software Development Kits (SDKs) and services available for developers to make their applications more intelligent, engaging and discoverable. Microsoft Cognitive Services enables developers to easily add intelligent features – such as emotion and video detection, facial, speech and vision recognition, and speech and language understanding – into their applications.”

 

Prerequisites

To use the tutorial, you need the following prerequisites:

  • Make sure Visual Studio 2013 or higher is installed.

Step 1: Subscribe for a Face API key

Before trying your hand at the Face API, you must sign up for a Face API subscription in the Microsoft Cognitive Services portal. The subscription comes with two keys, and either key can be used. Follow the link to sign up for the sample key subscription.

Step 2: Simple C# Solution

  1. Open Visual Studio.
  2. From the File menu, click New, then Project.
  3. In the New Project dialog box, select a WPF application template:
  4. In Visual Studio 2015, expand Installed > Templates > Visual C# > Windows > Classic Desktop > and select WPF Application.
  5. In Visual Studio 2017, expand Installed > Templates > Visual C# > Windows Classic Desktop > and select WPF App (.NET Framework).
  6. Name the application FaceTutorial, then click OK.
  7. In Solution Explorer, right-click your project (FaceTutorial in this case) and then click Manage NuGet Packages.
  8. In the NuGet Package Manager window, select nuget.org as your Package source.
  9. Search for Newtonsoft.Json, then Install. (In Visual Studio 2017, first click the Browse tab, then Search).

Step 3: Set up the Face API client library

A .NET client library encapsulates the Face API REST requests. Here we use the client library to simplify our work.
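To give a sense of what the library is doing for us, the sketch below shows roughly the raw REST request behind a detection call, issued with a plain HttpClient. This is illustrative only: the endpoint matches the one used later in this tutorial, the helper name DetectRawAsync is made up for this example, and the rest of the tutorial uses the client library's DetectAsync method instead.

    // Illustrative sketch: the raw REST request that the client library wraps.
    // The endpoint and header are those of the Face API v1.0 Detect operation;
    // DetectRawAsync is a made-up helper name used only for this example.
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;
    
    static class RawFaceApiExample
    {
        public static async Task<string> DetectRawAsync(string imageFilePath, string subscriptionKey)
        {
            using (var client = new HttpClient())
            {
                // The subscription key is sent in the Ocp-Apim-Subscription-Key header.
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
    
                string uri = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect"
                           + "?returnFaceId=true&returnFaceAttributes=age,gender";
    
                // The image is posted as a raw octet stream.
                byte[] imageBytes = File.ReadAllBytes(imageFilePath);
                using (var content = new ByteArrayContent(imageBytes))
                {
                    content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                    HttpResponseMessage response = await client.PostAsync(uri, content);
    
                    // The response body is a JSON array describing the detected faces.
                    return await response.Content.ReadAsStringAsync();
                }
            }
        }
    }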

Follow these instructions to configure the client library:

    1. In the Solution Explorer, right-click your project (FaceTutorial in this case) and then click Manage NuGet Packages.
    2. In the NuGet Package Manager window, select nuget.org as your Package source.
    3. Search for Microsoft.ProjectOxford.Face, then Install. (In Visual Studio 2017, first click the Browse tab, then Search).
    4. In Solution Explorer, check your project references. The references Microsoft.ProjectOxford.Common, Microsoft.ProjectOxford.Face, and Newtonsoft.Json are added automatically when the installation succeeds.

Step 4: Copy and paste the initial code

  1. Open MainWindow.xaml, and replace the existing code with the following code to create the window UI:
    <Window x:Class="FaceTutorial.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="700" Width="960">
        <Grid x:Name="BackPanel">
            <Image x:Name="FacePhoto" Stretch="Uniform" Margin="0,0,0,50" MouseMove="FacePhoto_MouseMove" />
            <DockPanel DockPanel.Dock="Bottom">
                <Button x:Name="BrowseButton" Width="72" Height="20" VerticalAlignment="Bottom" HorizontalAlignment="Left"
                        Content="Browse..."
                        Click="BrowseButton_Click" />
                <StatusBar VerticalAlignment="Bottom">
                    <StatusBarItem>
                    <TextBlock Name="faceStatusBar" />
                    </StatusBarItem>
                </StatusBar>
            </DockPanel>
        </Grid>
    </Window>
    
  2. Open MainWindow.xaml.cs, and replace the existing code with the following code:
    
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text;
    using System.Threading.Tasks;
    using System.Windows;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using Microsoft.ProjectOxford.Common.Contract;
    using Microsoft.ProjectOxford.Face;
    using Microsoft.ProjectOxford.Face.Contract;
    
    namespace FaceTutorial
    {
        public partial class MainWindow : Window
        {
            const string url = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0";
            const string key = "3ee5094df31b4af39fa412b41*******";
    
            private IFaceServiceClient faceServiceClient { get; set; }
    
            Face[] faceCount;          // The list of detected faces.
            String[] faceDescriptions; // The list of descriptions for the detected faces.
            double resizeFactor;       // The resize factor for the displayed image.
    
            public MainWindow()
            {
                InitializeComponent();
                faceServiceClient = new FaceServiceClient(key, url);
            }
    
            // Displays the image and calls Detect Faces.
            private async void BrowseButton_Click(object sender, RoutedEventArgs e)
            {
                // Get the image file to scan from the user.
                var openFileDialog = new Microsoft.Win32.OpenFileDialog();
                openFileDialog.Filter = "JPEG Image (*.jpg)|*.jpg";
                bool? result = openFileDialog.ShowDialog(this);
    
                // Return if canceled.
                if (result != true)
                {
                    return;
                }
    
                // Display the image file.
                string filePath = openFileDialog.FileName;
    
                Uri fileUri = new Uri(filePath);
                BitmapImage bitmapImageSource = new BitmapImage();
    
                bitmapImageSource.BeginInit();
                bitmapImageSource.CacheOption = BitmapCacheOption.None;
                bitmapImageSource.UriSource = fileUri;
                bitmapImageSource.EndInit();
    
                ImageBox.Source = bitmapImageSource;
            }
        }
    }
    
    

Insert your subscription key and verify the region.

Find these lines near the top of the MainWindow class in MainWindow.xaml.cs:

     const string url = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0";
     const string key = "<Subscription Key>";

Replace <Subscription Key> with the Face API subscription key you obtained in Step 1. Also check the url constant to be sure it points to the region where you obtained your subscription key.

Now your app can browse for a photo and display it in the window.

Step 5: Upload images to detect faces

We will call the asynchronous DetectAsync method of FaceServiceClient. It returns a list of the faces found in the image, together with information about each face.

Insert the following code in the MainWindow class:

    // Uploads the image file and calls Detect Faces.
    private async Task<Face[]> UploadAndDetectFaces(string imageFilePath)
    {
        // Call the Face API.
        try
        {
            using (Stream imageFileStream = File.OpenRead(imageFilePath))
            {
                Face[] faces = await faceServiceClient.DetectAsync(
                    imageFileStream,
                    returnFaceId: true,
                    returnFaceLandmarks: false,
                    returnFaceAttributes: new FaceAttributeType[]
                    {
                        FaceAttributeType.Gender,
                        FaceAttributeType.Age,
                        FaceAttributeType.Smile,
                        FaceAttributeType.Emotion,
                        FaceAttributeType.Glasses,
                        FaceAttributeType.Hair
                    });
                return faces;
            }
        }
        catch (Exception e)
        {
            MessageBox.Show(e.Message, "Error");
            return new Face[0];
        }
    }
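Each Face in the returned array carries a FaceId, a FaceRectangle, and the attributes requested above. If you want to sanity-check the call before wiring up the UI, a minimal sketch like the one below (the file path is just a placeholder) dumps the results to the debug output; it can run from any async method in the MainWindow class:

    // Illustrative only: quick dump of the detection results to the debug output.
    // The image path is a placeholder; replace it with a real file on your machine.
    Face[] faces = await UploadAndDetectFaces(@"C:\photos\sample.jpg");
    foreach (Face face in faces)
    {
        System.Diagnostics.Debug.WriteLine(
            "Id: {0}, age: {1}, rectangle: {2}x{3} at ({4},{5})",
            face.FaceId,
            face.FaceAttributes.Age,
            face.FaceRectangle.Width,
            face.FaceRectangle.Height,
            face.FaceRectangle.Left,
            face.FaceRectangle.Top);
    }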
    

Step 6: Mark faces in the image

In this step, we combine all the previous steps and mark the detected faces in the image. The Face API returns face coordinates in image pixels, while WPF draws in device-independent units (96 DPI), so the code computes a resizeFactor from the image's DPI and uses it to scale the rectangles.

Insert the following code at the end of the BrowseButton_Click event handler:

    
    // Detect any faces in the image.
    Title = "Detecting the faces...";
    faceCount = await UploadAndDetectFaces(filePath);
    Title = String.Format("Detection Finished. {0} face(s) detected", faceCount.Length);
    
    if (faceCount.Length > 0)
    {
        // Prepare to draw rectangles around the faces.
        DrawingVisual visual = new DrawingVisual();
        DrawingContext drawingContext = visual.RenderOpen();
        drawingContext.DrawImage(bitmapImageSource,
            new Rect(0, 0, bitmapImageSource.Width, bitmapImageSource.Height));
        double dpi = bitmapImageSource.DpiX;
        resizeFactor = 96 / dpi;
        faceDescriptions = new String[faceCount.Length];
    
        for (int i = 0; i < faceCount.Length; ++i)
        {
            Face face = faceCount[i];
    
            // Draw a rectangle on the face.
            drawingContext.DrawRectangle(
                Brushes.Transparent,
                new Pen(Brushes.Red, 2),
                new Rect(
                    face.FaceRectangle.Left * resizeFactor,
                    face.FaceRectangle.Top * resizeFactor,
                    face.FaceRectangle.Width * resizeFactor,
                    face.FaceRectangle.Height * resizeFactor
                )
            );
    
            // Store the face description.
            faceDescriptions[i] = FaceDescription(face);
        }
    
        drawingContext.Close();
    
        // Display the image with the rectangle around the face.
        RenderTargetBitmap faceWithRectBitmap = new RenderTargetBitmap(
            (int)(bitmapImageSource.PixelWidth * resizeFactor),
            (int)(bitmapImageSource.PixelHeight * resizeFactor),
            96,
            96,
            PixelFormats.Pbgra32);
    
        faceWithRectBitmap.Render(visual);
        ImageBox.Source = faceWithRectBitmap;
    
        // Set the status bar text.
        faceStatusBar.Text = "Place the mouse pointer over a face to see the face description.";
    }
    

Step 7: Describe the faces in the image

In this step, we examine the face properties and generate a string that describes the face. The string is displayed on the status bar when the mouse pointer hovers over the face rectangle.

Add this method to the MainWindow class to convert the face details into a string:

     

    // Returns a string that describes the given face.
    
    private string FaceDescription(Face face)
    {
        StringBuilder sb = new StringBuilder();
    
        sb.Append("Face: ");
    
        // Add the gender, age, and smile.
        sb.Append(face.FaceAttributes.Gender);
        sb.Append(", ");
        sb.Append(face.FaceAttributes.Age);
        sb.Append(", ");
        sb.Append(String.Format("smile {0:F1}%, ", face.FaceAttributes.Smile * 100));
    
        // Add the emotions. Display all emotions over 10%.
        sb.Append("Emotion: ");
        EmotionScores emotionScores = face.FaceAttributes.Emotion;
        if (emotionScores.Anger >= 0.1f) sb.Append(String.Format("anger {0:F1}%, ", emotionScores.Anger * 100));
        if (emotionScores.Contempt >= 0.1f) sb.Append(String.Format("contempt {0:F1}%, ", emotionScores.Contempt * 100));
        if (emotionScores.Disgust >= 0.1f) sb.Append(String.Format("disgust {0:F1}%, ", emotionScores.Disgust * 100));
        if (emotionScores.Fear >= 0.1f) sb.Append(String.Format("fear {0:F1}%, ", emotionScores.Fear * 100));
        if (emotionScores.Happiness >= 0.1f) sb.Append(String.Format("happiness {0:F1}%, ", emotionScores.Happiness * 100));
        if (emotionScores.Neutral >= 0.1f) sb.Append(String.Format("neutral {0:F1}%, ", emotionScores.Neutral * 100));
        if (emotionScores.Sadness >= 0.1f) sb.Append(String.Format("sadness {0:F1}%, ", emotionScores.Sadness * 100));
        if (emotionScores.Surprise >= 0.1f) sb.Append(String.Format("surprise {0:F1}%, ", emotionScores.Surprise * 100));
    
        // Add glasses.
        sb.Append(face.FaceAttributes.Glasses);
        sb.Append(", ");
    
        // Add hair.
        sb.Append("Hair: ");
    
        // Display baldness confidence if over 1%.
        if (face.FaceAttributes.Hair.Bald >= 0.01f)
            sb.Append(String.Format("bald {0:F1}% ", face.FaceAttributes.Hair.Bald * 100));
    
        // Display all hair color attributes over 10%.
        HairColor[] hairColors = face.FaceAttributes.Hair.HairColor;
        foreach (HairColor hairColor in hairColors)
        {
            if (hairColor.Confidence >= 0.1f)
            {
                sb.Append(hairColor.Color.ToString());
                sb.Append(String.Format(" {0:F1}% ", hairColor.Confidence * 100));
            }
        }
    
        // Return the built string.
        return sb.ToString();
    }
    

Step 8: Display the face description

Add the ImageBox_MouseMove method to the MainWindow class with the following code:

     

    // Displays the face description when the mouse is over a face rectangle.
    private void ImageBox_MouseMove(object sender, MouseEventArgs e)
    {
        // If the REST call has not completed, return from this method.
        if (faceCount == null)
            return;
    
        // Find the mouse position relative to the image.
        Point mouseXY = e.GetPosition(ImageBox);
    
        ImageSource imageSource = ImageBox.Source;
        BitmapSource bitmapSource = (BitmapSource)imageSource;
    
        // Scale adjustment between the actual size and displayed size.
        var scale = ImageBox.ActualWidth / (bitmapSource.PixelWidth / resizeFactor);
    
        // Check if this mouse position is over a face rectangle.
        bool mouseOverFace = false;
    
        for (int i = 0; i < faceCount.Length; ++i)
        {
            FaceRectangle fr = faceCount[i].FaceRectangle;
            double left = fr.Left * scale;
            double top = fr.Top * scale;
            double width = fr.Width * scale;
            double height = fr.Height * scale;
    
            // Display the face description for this face if the mouse is over this face rectangle.
            if (mouseXY.X >= left && mouseXY.X <= left + width &&
                mouseXY.Y >= top && mouseXY.Y <= top + height)
            {
                faceStatusBar.Text = faceDescriptions[i];
                mouseOverFace = true;
                break;
            }
        }
    
        // If the mouse is not over a face rectangle.
        if (!mouseOverFace)
            faceStatusBar.Text = "Place the mouse pointer over a face to see the face description.";
    }
    

Run the application and browse for an image containing faces. Wait a few seconds to allow the cloud API to respond; after that, a red rectangle appears around each face in the image. Move the mouse over a face rectangle and the description of that face appears on the status bar:

Download

You can download the complete source code from the code-adda GitHub group:

https://github.com/code-adda/FaceReognition

Summary

We have learned the basic process for using the Face API and created an application that detects, marks, and describes the faces in an image.
