How to Write using Voice Commands using Kinect

Vuyiswamb
Posted in the Kinect category, for the Beginner level.

In this article I demonstrate how to use voice commands as input in your Kinect application.


 Download source code for How to Write using Voice Commands using Kinect


Introduction

 
I have been building a banking prototype with Kinect, and I would like to share an exciting small piece of code that I have written. I will bring you other parts of the functionality that cover the basic fundamentals of Kinect. This part represents a feature where the system asks for a bank account number so that it can print a bank statement for a client without the client standing in a queue.
There are good examples in the Kinect for Windows SDK Programming Guide book that you can learn from to get started.
 

Objective

 
In this article I show you how to convert your speech into text, process the commands you have spoken in your application, see what you have said, and get an opportunity to correct it.
 

Start

 
Create a WPF application as explained in the previous articles and make sure your XAML looks like this:
 

VoiceKeyBoard.xaml

 
<Window x:Class="WriteUsingVoiceCommands.VoiceKeyBoard" 
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
        Title="MainWindow" Height="157.689" Width="558.289" WindowStartupLocation="CenterScreen"   WindowStyle="ToolWindow"> 
    <Window.Background> 
        <ImageBrush ImageSource="/WriteUsingVoiceCommands;component/Images/u30354496S.jpg" /> 
    </Window.Background> 
    <Grid Margin="0,0,-6.6,217.8"> 
         
        <TextBox  x:Name="txtAccountNumber" HorizontalAlignment="Center" VerticalAlignment="Top" Width="317" Height="36"   ToolTip="Say your account Number" FontSize="20" Text=" Please say your Account Number"  Canvas.Left="160" Canvas.Top="338" Margin="104,42,137.6,-78" /> 
       <Label HorizontalAlignment="Left" x:Name="Message" Height="30" Margin="171,94,0,-124"  Content=""  VerticalAlignment="Top" Width="156" RenderTransformOrigin="0.841,0.307"/> 
    </Grid> 
</Window> 
 
Your code-behind should look like this. I have commented the code to explain what each part or line is responsible for, so I do not need to break up the listing with explanations.
 
 

VoiceKeyBoard.cs

 
 using Microsoft.Kinect; 
using Microsoft.Speech.AudioFormat; 
using Microsoft.Speech.Recognition; 
using System; 
using System.Collections.Generic; 
using System.Data; 
using System.IO; 
using System.Linq; 
using System.Text; 
using System.Threading.Tasks; 
using System.Windows; 
using System.Windows.Controls; 
using System.Windows.Data; 
using System.Windows.Documents; 
using System.Windows.Input; 
using System.Windows.Media; 
using System.Windows.Media.Imaging; 
using System.Windows.Shapes; 
  
namespace WriteUsingVoiceCommands 
{ 
    /// <summary>
    /// Interaction logic for VoiceKeyBoard.xaml
    /// </summary>
    public partial class VoiceKeyBoard : Window 
    { 
        #region "Voice Variables" 
        ///  
        /// Format of Kinect audio stream samples. 
        ///  
        private const EncodingFormat AudioFormat = EncodingFormat.Pcm; 
  
        ///  
        /// Samples per second in Kinect audio stream. 
        ///  
        private const int AudioSamplesPerSecond = 16000; 
  
        ///  
        /// Bits per audio sample in Kinect audio stream. 
        ///  
        private const int AudioBitsPerSample = 16; 
  
        ///  
        /// Number of channels in Kinect audio stream. 
        ///  
        private const int AudioChannels = 1; 
  
        ///  
        /// Average bytes per second in Kinect audio stream 
        ///  
        private const int AudioAverageBytesPerSecond = 32000; 
  
        ///  
        /// Block alignment in Kinect audio stream. 
        ///  
        private const int AudioBlockAlign = 2; 
  
        ///  
        /// Amount of time (in milliseconds) for which we keep sound source angle data coming from Kinect sensor. 
        ///  
        private const int AngleRetentionPeriod = 1000; 
  
        ///  
        /// Default threshold value (in [0.0,1.0] interval) used to determine whether we'll propagate a speech 
        /// event or drop it as if it had never happened. 
        ///  
        private const double DefaultConfidenceThreshold = 0.3; 
  
  
        ///  
        /// Names of the speech grammar rules, matching the rule ids in the grammar file. Note that each name must match exactly; it is case sensitive. 
        ///  
        private const string Onerule = "onerule"; 
        private const string tworule = "tworule"; 
        private const string threerule = "threerule"; 
        private const string fourrule = "fourrule"; 
        private const string fiverule = "fiverule"; 
        private const string sixrule = "sixrule"; 
        private const string sevenrule = "sevenrule"; 
        private const string eightrule = "eightrule"; 
        private const string ninerule = "ninerule"; 
        private const string zerorule = "zerorule"; 
        private const string deleterule = "deleterule"; 
  
        // private const string 
        ///  
        /// Speech recognizer used to detect voice commands issued by application users. 
        ///  
        private SpeechRecognizer speechRecognizer; 
  
  
        #endregion 
        #region "Grammar Variables" 
        ///  
        /// Speech grammar used during Application. 
        ///    
  
        ///Numbers  
        private Grammar OneruleGrammar; 
        private Grammar tworuleGrammar; 
        private Grammar threeruleGrammar; 
        private Grammar fourruleGrammar; 
        private Grammar fiveruleGrammar; 
        private Grammar sixruleGrammar; 
        private Grammar sevenruleGrammar; 
        private Grammar eightruleGrammar; 
        private Grammar nineruleGrammar; 
        private Grammar zeroruleGrammar; 
        //delete Command 
        private Grammar deleteGrammar; 
        #endregion 
        #region "Voice Recognition" 
        private void SpeechRecognized(object sender, SpeechRecognizerEventArgs e) 
        { 
            const string OneruleCommand = "ONE"; 
            const string tworuleCommand = "TWO"; 
            const string threeruleCommand = "THREE"; 
            const string fourruleCommand = "FOUR"; 
            const string fiveruleCommand = "FIVE"; 
            const string sixruleCommand = "SIX"; 
            const string sevenruleCommand = "SEVEN"; 
            const string eightruleCommand = "EIGHT"; 
            const string nineruleCommand = "NINE"; 
            const string zeroruleCommand = "ZERO"; 
             
            //Delete 
            const string deleteruleComand = "DELETE"; 
              
            if (null == e.SemanticValue) 
            { 
                return; 
            } 
  
  
            // Handle game mode control commands 
            switch (e.SemanticValue) 
            { 
  
                case OneruleCommand: 
  
                    DisplayWords(OneruleCommand); 
                    return; 
  
                case tworuleCommand: 
  
                    DisplayWords(tworuleCommand); 
                    return; 
  
                case threeruleCommand: 
                    DisplayWords(threeruleCommand); 
                    return; 
  
                case fourruleCommand: 
                    DisplayWords(fourruleCommand); 
                    return; 
  
                case fiveruleCommand: 
                    DisplayWords(fiveruleCommand); 
                    return; 
  
                case sixruleCommand: 
                    DisplayWords(sixruleCommand); 
                    return; 
  
                case sevenruleCommand: 
                    DisplayWords(sevenruleCommand); 
                    return; 
  
                case eightruleCommand: 
                    DisplayWords(eightruleCommand); 
                    return; 
  
                case nineruleCommand: 
                    DisplayWords(nineruleCommand); 
                    return; 
  
                case zeroruleCommand: 
                    DisplayWords(zeroruleCommand); 
                    return; 
  
  
                case deleteruleComand: 
                    DisplayWords(deleteruleComand); 
                    return; 
  
            } 
  
            // We only handle speech commands with an associated sound source angle, so we can find the 
            // associated player 
            if (!e.SourceAngle.HasValue) 
            { 
                return; 
            } 
        } 

        /// <summary>
        /// Displays the recognized command: appends the spoken digit to the
        /// account number text box, or removes the last digit when the DELETE
        /// command is spoken. (Minimal reconstruction; see the downloadable
        /// source code for the full implementation.)
        /// </summary>
        private void DisplayWords(string word) 
        { 
            if (word == "DELETE") 
            { 
                if (txtAccountNumber.Text.Length > 0) 
                { 
                    txtAccountNumber.Text = txtAccountNumber.Text.Substring(0, txtAccountNumber.Text.Length - 1); 
                } 
            } 
            else 
            { 
                txtAccountNumber.Text += GenericFunctions.ConvertWordsToNumber(word).ToString(); 
            } 
        } 
        #endregion 
    } 

    /// <summary>
    /// Event arguments for SpeechRecognizer.
    /// </summary>
    public class SpeechRecognizerEventArgs : EventArgs 
    { 
        ///  
        /// Speech phrase (text) recognized. 
        ///  
        public string Phrase { get; set; } 
  
        ///  
        /// Semantic value associated with recognized speech phrase. 
        ///  
        public string SemanticValue { get; set; } 
  
        ///  
        /// Best guess at source angle from which speech command originated. 
        ///  
        public double? SourceAngle { get; set; } 
    } 
} 
  

SpeechRecognizer.cs

  

using Microsoft.Kinect;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;
using System.Linq;
 
namespace WriteUsingVoiceCommands
{
 
 
 
    using System;
    using System.Collections.Generic;
    using RecentAngle = System.Collections.Generic.KeyValuePair<System.DateTime, Microsoft.Kinect.SoundSourceAngleChangedEventArgs>;
 
    /// 
    /// Recognizes speech using Kinect audio stream as input source.
    /// 
    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Design", "CA1001:TypesThatOwnDisposableFieldsShouldBeDisposable",
        Justification = "In a full-fledged application, the SpeechRecognitionEngine object should be properly disposed. For the sake of simplicity, we're omitting that code in this sample.")]
    public class SpeechRecognizer
    {
        /// 
        /// Format of Kinect audio stream samples.
        /// 
        private const EncodingFormat AudioFormat = EncodingFormat.Pcm;
 
        /// 
        /// Samples per second in Kinect audio stream.
        /// 
        private const int AudioSamplesPerSecond = 16000;
 
        /// 
        /// Bits per audio sample in Kinect audio stream.
        /// 
        private const int AudioBitsPerSample = 16;
 
        /// 
        /// Number of channels in Kinect audio stream.
        /// 
        private const int AudioChannels = 1;
 
        /// 
        /// Average bytes per second in Kinect audio stream
        /// 
        private const int AudioAverageBytesPerSecond = 32000;
 
        /// 
        /// Block alignment in Kinect audio stream.
        /// 
        private const int AudioBlockAlign = 2;
 
        /// 
        /// Amount of time (in milliseconds) for which we keep sound source angle data coming from Kinect sensor.
        /// 
        private const int AngleRetentionPeriod = 1000;
 
        /// 
        /// Default threshold value (in [0.0,1.0] interval) used to determine whether we'll propagate a speech
        /// event or drop it as if it had never happened.
        /// 
        private const double DefaultConfidenceThreshold = 0.3;
 
        /// 
        /// Queue containing Kinect sound source angle information from the most recent AngleRetentionPeriod.
        /// 
        private readonly Queue<RecentAngle> recentSourceAngles = new Queue<RecentAngle>();
 
        /// 
        /// Engine used to configure and control speech recognition behavior.
        /// 
        private readonly SpeechRecognitionEngine speechEngine;
 
        /// 
        /// Kinect audio source used to stream audio data from Kinect sensor.
        /// 
        private KinectAudioSource kinectAudioSource;
 
        /// <summary>
        /// Initializes a new instance of the SpeechRecognizer class.
        /// </summary>
        /// <param name="recognizerInfo">
        /// Metadata used to identify the recognizer acoustic model to be used.
        /// </param>
        /// <param name="grammars">
        /// Set of grammars to be loaded into speech recognition engine. May NOT be null.
        /// </param>
        private SpeechRecognizer(RecognizerInfo recognizerInfo, IEnumerable<Grammar> grammars)
        {
            this.ConfidenceThreshold = DefaultConfidenceThreshold;
            this.speechEngine = new SpeechRecognitionEngine(recognizerInfo);
 
            try
            {
                foreach (Grammar grammar in grammars)
                {
                    speechEngine.LoadGrammar(grammar);
                }
            }
            catch (InvalidOperationException)
            {
                // Grammar may not be in a valid state
                this.speechEngine.Dispose();
                this.speechEngine = null;
            }
        }
 
        public event EventHandler<SpeechRecognizerEventArgs> SpeechRecognized;

        public event EventHandler<SpeechRecognizerEventArgs> SpeechRejected;
 
        /// 
        /// Threshold value (in [0.0,1.0] interval) used to determine whether we'll propagate a speech
        /// event or drop it as if it had never happened.
        /// 
        public double ConfidenceThreshold { get; set; }
 
        /// <summary>
        /// Creates a new instance of the SpeechRecognizer class.
        /// </summary>
        /// <param name="grammars">
        /// Array of grammars to be loaded into speech recognition engine.
        /// </param>
        /// <returns>
        /// SpeechRecognizer constructed. May be null if a recognizer couldn't be
        /// constructed from specified parameters, or if a valid acoustic model
        /// could not be found.
        /// </returns>
        public static SpeechRecognizer Create(Grammar[] grammars)
        {
            // Specified grammars should be valid
            if ((null == grammars) || (0 == grammars.Length))
            {
                return null;
            }
 
            var ri = GetKinectRecognizer();
            if (null == ri)
            {
                // speech prerequisites may not be installed.
                return null;
            }
 
            return new SpeechRecognizer(ri, grammars);
        }
 
        /// 
        /// Starts speech recognition using audio stream from specified KinectAudioSource.
        /// 
        /// 
        /// Audio source to use as input to speech recognizer.
        /// 
        public void Start(KinectAudioSource audioSource)
        {
            if (null == audioSource)
            {
                return;
            }
 
            this.kinectAudioSource = audioSource;
            this.kinectAudioSource.AutomaticGainControlEnabled = false;
            this.kinectAudioSource.NoiseSuppression = true;
            this.kinectAudioSource.BeamAngleMode = BeamAngleMode.Adaptive;
 
            this.kinectAudioSource.SoundSourceAngleChanged += this.SoundSourceChanged;
            this.speechEngine.SpeechRecognized += this.SreSpeechRecognized;
            this.speechEngine.SpeechRecognitionRejected += this.SreSpeechRecognitionRejected;
 
            var kinectStream = this.kinectAudioSource.Start();
            this.speechEngine.SetInputToAudioStream(
                kinectStream, new SpeechAudioFormatInfo(AudioFormat, AudioSamplesPerSecond, AudioBitsPerSample, AudioChannels, AudioAverageBytesPerSecond, AudioBlockAlign, null));
            this.speechEngine.RecognizeAsync(RecognizeMode.Multiple);
        }
 
        /// 
        /// Stop streaming Kinect audio data and recognizing speech.
        /// 
        public void Stop()
        {
            if (this.kinectAudioSource != null)
            {
                this.kinectAudioSource.Stop();
                this.speechEngine.RecognizeAsyncCancel();
                this.speechEngine.RecognizeAsyncStop();
 
                this.kinectAudioSource.SoundSourceAngleChanged -= this.SoundSourceChanged;
                this.speechEngine.SpeechRecognized -= this.SreSpeechRecognized;
                this.speechEngine.SpeechRecognitionRejected -= this.SreSpeechRecognitionRejected;
            }
        }
 
        /// 
        /// Gets the metadata for the speech recognizer (acoustic model) most suitable to
        /// process audio from Kinect device.
        /// 
        /// 
        /// RecognizerInfo if found, null otherwise.
        /// 
        private static RecognizerInfo GetKinectRecognizer()
        {
            Func<RecognizerInfo, bool> matchingFunc = r =>
            {
                string value;
                r.AdditionalInfo.TryGetValue("Kinect", out value);
                return "True".Equals(value, StringComparison.OrdinalIgnoreCase) && "en-US".Equals(r.Culture.Name, StringComparison.OrdinalIgnoreCase);
            };
            return SpeechRecognitionEngine.InstalledRecognizers().Where(matchingFunc).FirstOrDefault();
        }
 
        /// 
        /// Handler for event triggered when sound source angle changes in Kinect audio stream.
        /// 
        /// 
        /// Object sending the event.
        /// 
        /// 
        /// Event arguments.
        /// 
        private void SoundSourceChanged(object sender, SoundSourceAngleChangedEventArgs e)
        {
            DateTime now = DateTime.Now;
            recentSourceAngles.Enqueue(new RecentAngle(now, e));
 
            // Remove angles past our time range of interest
            while (recentSourceAngles.Peek().Key < now.AddMilliseconds(-AngleRetentionPeriod))
            {
                recentSourceAngles.Dequeue();
            }
        }
 
        /// 
        /// Handler for rejected speech events.
        /// 
        /// 
        /// Object sending the event.
        /// 
        /// 
        /// Event arguments.
        /// 
        private void SreSpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
        {
            OnRejected(GetMostRecentAngle());
        }
 
        /// 
        /// Handler for recognized speech events.
        /// 
        /// 
        /// Object sending the event.
        /// 
        /// 
        /// Event arguments.
        /// 
        private void SreSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Confidence < ConfidenceThreshold)
            {
                return;
            }
 
            OnRecognized(e.Result.Text, e.Result.Semantics.Value.ToString(), GetMostRecentAngle());
        }
 
        /// 
        /// Helper method that invokes SpeechRecognized event if there are any event subscribers registered.
        /// 
        /// 
        /// Speech phrase (text) recognized.
        /// 
        /// 
        /// Semantic value associated with recognized speech phrase.
        /// 
        /// 
        /// Best guess at source angle from which speech command originated.
        /// 
        private void OnRecognized(string phrase, string semanticValue, double? sourceAngle)
        {
            if (null != SpeechRecognized)
            {
                SpeechRecognized(this, new SpeechRecognizerEventArgs { Phrase = phrase, SemanticValue = semanticValue, SourceAngle = sourceAngle });
            }
        }
 
        /// 
        /// Helper method that invokes SpeechRejected event if there are any event subscribers registered.
        /// 
        /// 
        /// Best guess at source angle from which speech utterance originated.
        /// 
        private void OnRejected(double? sourceAngle)
        {
            if (null != SpeechRejected)
            {
                SpeechRejected(this, new SpeechRecognizerEventArgs { SourceAngle = sourceAngle });
            }
        }
 
        /// 
        /// Give an estimate for the average angle of sound perceived during the last AngleRetentionPeriod.
        /// 
        /// 
        /// Average angle of sound perceived. May be null if sound source events received had less than
        /// minimum acceptable confidence threshold.
        /// 
        private double? GetMostRecentAngle()
        {
            const double MinimumIndividualConfidence = 0.1;
            const double MinimumTotalConfidence = 0.25;
 
            if (recentSourceAngles.Count <= 0)
            {
                return null;
            }
 
            double totalConfidence = 0.0;
            double totalAngle = 0;
 
            foreach (RecentAngle recentAngle in recentSourceAngles)
            {
                if (recentAngle.Value.ConfidenceLevel < MinimumIndividualConfidence)
                {
                    continue;
                }
 
                totalConfidence += recentAngle.Value.ConfidenceLevel;
                totalAngle += recentAngle.Value.ConfidenceLevel * recentAngle.Value.Angle;
            }
 
            if (totalConfidence < MinimumTotalConfidence)
            {
                return null;
            }
 
            return totalAngle / totalConfidence;
        }
    }
}

 

SpeechGrammar.xml

 
This file is added to the project as a resource.
 

<grammar version="1.0" xml:lang="en-US" tag-format="semantics/1.0-literals" xmlns="http://www.w3.org/2001/06/grammar">
 
  <rule id="onerule" scope="public">
    <one-of>
      <item>
        <tag>ONE</tag>
        <one-of>
          <item>1</item>
          <item>One</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
  <rule id="tworule" scope="public">
    <one-of>
      <item>
        <tag>TWO</tag>
        <one-of>
          <item>2</item>
          <item>two</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
  <rule id="threerule" scope="public">
    <one-of>
      <item>
        <tag>THREE</tag>
        <one-of>
          <item>3</item>
          <item>Three</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
 
  <rule id="fourrule" scope="public">
    <one-of>
      <item>
        <tag>FOUR</tag>
        <one-of>
          <item>4</item>
          <item>Four</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
  <rule id="fiverule" scope="public">
    <one-of>
      <item>
        <tag>FIVE</tag>
        <one-of>
          <item>5</item>
          <item>five</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
  <rule id="sixrule" scope="public">
    <one-of>
      <item>
        <tag>SIX</tag>
        <one-of>
          <item>6</item>
          <item>Six</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
  <rule id="sevenrule" scope="public">
    <one-of>
      <item>
        <tag>SEVEN</tag>
        <one-of>
          <item>7</item>
          <item>Seven</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
  <rule id="eightrule" scope="public">
    <one-of>
      <item>
        <tag>EIGHT</tag>
        <one-of>
          <item>8</item>
          <item>Eight</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
 
  <rule id="ninerule" scope="public">
    <one-of>
      <item>
        <tag>NINE</tag>
        <one-of>
          <item>9</item>
          <item>Nine</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
  <rule id="zerorule" scope="public">
    <one-of>
      <item>
        <tag>ZERO</tag>
        <one-of>
          <item>0</item>
          <item>Zero</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
 
 
 
  <rule id="deleterule" scope="public">
    <one-of>
      <item>
        <tag>DELETE</tag>
        <one-of>
          <item>DELETE</item>
        </one-of>
      </item>
    </one-of>
  </rule>
 
</grammar>
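Each rule above is wrapped in its own Grammar object before being passed to SpeechRecognizer.Create. A minimal sketch of how that can be done, assuming the XML file is compiled as an embedded resource (the resource name "WriteUsingVoiceCommands.SpeechGrammar.xml" and the GrammarLoader class are assumptions for illustration; the downloadable source contains the actual loading code):

```csharp
using System.IO;
using System.Reflection;
using Microsoft.Speech.Recognition;

namespace WriteUsingVoiceCommands
{
    public static class GrammarLoader
    {
        /// <summary>
        /// Creates a Grammar for a single public rule (e.g. "onerule")
        /// from the embedded SpeechGrammar.xml resource.
        /// </summary>
        public static Grammar LoadRule(string ruleName)
        {
            // Each call opens a fresh stream over the embedded grammar file.
            Stream stream = Assembly.GetExecutingAssembly()
                .GetManifestResourceStream("WriteUsingVoiceCommands.SpeechGrammar.xml");

            // Scope the Grammar to the named rule so its <tag> value becomes
            // the semantic value seen in SpeechRecognized.
            return new Grammar(stream, ruleName);
        }
    }
}
```

Each grammar variable can then be created as, for example, `OneruleGrammar = GrammarLoader.LoadRule(Onerule);` before the whole set is handed to the speech engine.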

GenericFunctions.cs

 
using System; 
using System.Collections.Generic; 
using System.Linq; 
using System.Text; 
using System.Threading.Tasks; 
  
namespace WriteUsingVoiceCommands 
{ 
    public static class GenericFunctions 
    { 
  
        //Convert words to numbers 
        public static int ConvertWordsToNumber(string NumberWord) 
        { 
            int Result = 0; 
  
            // Handle game mode control commands 
            switch (NumberWord) 
            { 
                case "ONE": 
                    Result = 1; 
                    break; 
  
                case "TWO": 
                    Result = 2; 
                    break; 
  
                case "THREE": 
                    Result = 3; 
                    break; 
  
                case "FOUR": 
                    Result = 4; 
                    break; 
  
                case "FIVE": 
                    Result = 5; 
                    break; 
  
                case "SIX": 
                    Result = 6; 
                    break; 
  
                case "SEVEN": 
                    Result = 7; 
                    break; 
  
                case "EIGHT": 
                    Result = 8; 
                    break; 
  
                case "NINE": 
                    Result = 9; 
                    break; 
  
                case "ZERO": 
                    Result = 0; 
  
                    break; 
  
            } 
  
            return Result; 
        } 
    } 
} 
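To see how spoken words become an account number, the conversion can be exercised on its own. The sketch below mirrors ConvertWordsToNumber with a lookup table and concatenates a sequence of recognized commands into a number string (AccountNumberDemo is a name invented for this example):

```csharp
using System;
using System.Linq;

public static class AccountNumberDemo
{
    // The index of each word in this array is the digit it names,
    // mirroring the switch in GenericFunctions.ConvertWordsToNumber.
    private static readonly string[] Words =
        { "ZERO", "ONE", "TWO", "THREE", "FOUR", "FIVE", "SIX", "SEVEN", "EIGHT", "NINE" };

    public static int ConvertWordsToNumber(string word) => Array.IndexOf(Words, word);

    public static void Main()
    {
        // Simulate the user dictating three digits in sequence.
        var spoken = new[] { "ONE", "FOUR", "TWO" };
        string account = string.Concat(spoken.Select(w => ConvertWordsToNumber(w).ToString()));
        Console.WriteLine(account); // prints "142"
    }
}
```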
 

App.xaml

 
<Application x:Class="WriteUsingVoiceCommands.App" 
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
             StartupUri="VoiceKeyBoard.xaml"> 
    <Application.Resources> 
         
    </Application.Resources> 
</Application> 

Project Retrospective

 
When you run this project, a window like the one below in figure 1.1 will appear.


Figure 1.1
 
Now that your application is running, say the digits from one to zero: start with 1, 2, 3, 4, 5, 6, 7, 8, 9 and zero. You can add more commands, but don't forget that when you add a command you must also declare a grammar variable for it and add a matching rule to your XML grammar file. While you are saying the numbers, the screen refreshes with each new number, and if you make a mistake you can say "Delete" to remove the last spoken number, as depicted in figure 1.2.


Figure 1.2
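To extend the sample with a new command, a matching rule goes into SpeechGrammar.xml alongside the grammar variable and switch case. For example, a hypothetical CLEAR rule (not part of this sample) that could empty the text box would look like this:

```xml
<rule id="clearrule" scope="public">
  <one-of>
    <item>
      <tag>CLEAR</tag>
      <one-of>
        <item>Clear</item>
      </one-of>
    </item>
  </one-of>
</rule>
```

The rule id ("clearrule") must match the name you pass when creating the Grammar, and the `<tag>` value ("CLEAR") is what arrives as e.SemanticValue in SpeechRecognized.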
 
 

Reference

 
http://www.dotnetfunda.com/articles/article2076-speech-recognition-in-kinect.aspx
 
 

Conclusion
 

This article covered part of a larger piece of functionality and presented some fundamentals of voice commands in Kinect. I have an earlier article with similar code, but that one combined speech with gestures; here I wanted to strip everything else away and present the basic fundamentals of speech recognition in the Microsoft Kinect SDK.
 
Thank you again for visiting DotNetFunda; I can't wait for my next article. Many thanks also to Channel9 for their interest in articles from DotNetFunda.com.
 
Vuyiswa Maseko
 

About the Author

Full Name: Vuyiswa Maseko
Country: South Africa
http://www.Dotnetfunda.com
Vuyiswa Junius Maseko is a founder of Vimalsoft (Pty) Ltd (http://www.vimalsoft.com/) and a forum moderator at www.DotNetFunda.com. Vuyiswa has been developing for 16 years. His major strengths are C# 1.1 through 4.5, VB.NET and SQL; his interests were in ASP.NET, C#, Silverlight, WPF, WCF and WF, and are now in Kinect for Windows and Unity 3D. He has been using .NET since its beta version. Vuyiswa believes that Kinect and HoloLens are the next generation of computing. Thanks to people like Chris Maunder (CodeProject), Colin Angus Mackay (CodeProject), Dave Kreskowiak (CodeProject), Sheo Narayan (DotNetFunda) and Rajesh Kumar (Microsoft); they have made Vuyiswa what he is today.
