Archive for the ‘Uncategorized’ Category

Although AI, Machine Learning and Deep Learning are actually subsets of one another (Deep Learning is a subset of Machine Learning, which is in turn a subset of AI), each one of them has a specific use for specific requirements. Generally, AI is about how a machine takes a decision on making an action, Machine Learning is about prediction based on historical data, and Deep Learning is mainly about making the computer understand content such as the content of a picture, also based on the data provided. Data is the fuel of any AI project. In this blog post I will be focusing on Deep Learning.

If you have worked before with, or learned about, Neural Networks, Deep Learning is the next level. A Neural Network is mainly composed of an input layer, an output layer and a middle (hidden) layer. In general, the input layer receives the data entering the application and the output layer is where the result comes out of the system. The middle layer is mainly where the processing is applied to the input data.

Now Deep Learning follows the same concept as the neural network, but the main difference is that it has multiple middle layers between the input and the output layers. In the following link you can find a simulation of a Neural Network for an XOR function, with all the math equations for the Neural Network available on the webpage, for both the forward pass and the backpropagation pass. The simulation also shows you the different values that are being changed to optimize the network for the best possible output based on the training dataset provided. Go ahead and play with the simulation. Try it out with one step forward, one iteration forward and 1000 iterations so you can see the difference in the weights that change based on the data provided.
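To make the forward pass more concrete, here is a minimal sketch of my own (in C#, not taken from the simulation; the weights below are placeholders, not trained values) of a single forward pass through a tiny 2-2-1 network with sigmoid activations:

using System;

class XorForwardPass
{
    static double Sigmoid(double z) => 1.0 / (1.0 + Math.Exp(-z));

    static void Main()
    {
        // One XOR input (x1, x2); the weights below are illustrative placeholders.
        double[] input = { 1.0, 0.0 };

        // Hidden layer: two neurons, each with two weights and a bias.
        double[,] wHidden = { { 0.5, -0.4 }, { 0.9, 0.7 } };
        double[] bHidden = { 0.1, -0.2 };

        // Output layer: one neuron with two weights and a bias.
        double[] wOutput = { 1.2, -1.1 };
        double bOutput = 0.05;

        var hidden = new double[2];
        for (int j = 0; j < 2; j++)
        {
            double z = bHidden[j];
            for (int i = 0; i < 2; i++)
                z += wHidden[j, i] * input[i];
            hidden[j] = Sigmoid(z);
        }

        double output = Sigmoid(bOutput + wOutput[0] * hidden[0] + wOutput[1] * hidden[1]);
        Console.WriteLine($"Network output: {output:F4}");

        // A backpropagation pass would then adjust the weights to push this
        // output toward the expected XOR result (1 for the input 1,0).
    }
}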

If you want to start with Deep Learning, some cloud vendors have already implemented their own deep learning toolkits that you can use, like CNTK from Microsoft (Code Sample) and TensorFlow (Code Sample) from Google. There are also multiple platforms designed for specific services that can run on top of these toolkits, like Keras, which can actually run on top of CNTK, TensorFlow, Theano and others. Caffe is another deep learning framework, developed by Berkeley AI Research.

Remember that the most complicated issue for any deep learning or data-driven project is the dataset: gathering the right data, normalizing it and cleansing it will take most of the project's time. The implementation of the project won't take as much of your time as the work on the data itself.

You can find different projects that have been implemented on the different deep learning toolkits in the following links:

You can also check the following link for the different deep learning projects that are dominating, and will continue to dominate, this year.

As business acceptance of AI and deep learning has progressed a lot lately, and as it is continuously being pushed harder by the technology vendors, the ethics of working with such platforms and services is extremely important.

In the following YouTube video, Microsoft presents the power and the impact of AI in a great way. It all depends on how you use this unlimited power.

So basically, why Deep Learning now? First, with the explosion of cloud computing providers and the different vendors providing their solutions on top of them, the cost of running such an environment has become increasingly cheaper than before, especially with the newly designed chips that help run these solutions, from NVIDIA GPUs to vendor-designed machines like the TPU from Google and the FPGAs used by Microsoft. The second main reason is the availability of data, with sophisticated data being generated by different business applications, social media, the Internet of Things, etc.


Understanding Azure Chat bot

Posted: March 23, 2018 in Uncategorized

In the next blog posts I will go through the different components of the Azure Chat Bot and how you can benefit from it in your application.

First of all, you can think of a chatbot as your application, just with a different interface. Its main target is to facilitate the workflow for your users in a conversational format. Most of the chatbots available online exist to make it easier for your users to do things like submit a new order for your product. One of the great benefits of a chatbot is that it gives your application the ability to reach your customers through different channels easily, just by some configuration on your chatbot. Bots are also able to communicate with your users in different formats that you can define, whether plain text, UI elements like the HeroCard, or even speech.
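As a small illustration of those richer reply formats, here is a sketch of my own (not from the post; the card content and method name are purely illustrative) of sending a HeroCard from inside a Bot Builder v3 dialog:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

public static class CardReplies
{
    // Assumed to be called from inside a dialog's message handler,
    // where an IDialogContext is available.
    public static async Task ReplyWithHeroCardAsync(IDialogContext context)
    {
        var reply = context.MakeMessage();
        reply.Attachments = new List<Attachment>
        {
            new HeroCard
            {
                Title = "New order",
                Subtitle = "Would you like to submit a new order?",
                Buttons = new List<CardAction>
                {
                    new CardAction(ActionTypes.ImBack, "Yes", value: "yes"),
                    new CardAction(ActionTypes.ImBack, "No", value: "no")
                }
            }.ToAttachment()
        };

        await context.PostAsync(reply);
    }
}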

The following picture, which I got from the Microsoft website, showcases the similarity between a traditional application and a bot application.

In this blog post, I will clarify the different components of the libraries that you can use to build the bot application and its workflow. For the Microsoft Chat Bot, Microsoft has done some work decoupling the services that you will need to work with to build your bot. There are 2 main libraries that you will be using to develop the application using .Net:

  • The Bot Connector (.Net Library)
    • This library is mainly used to connect your bot to your channels, and vice versa, using the REST API. The Connector then uses the Activity object to pass information from one side to the other. There are some predefined channels that you can use for your bot application, like Skype, Twilio and others… If you don't see the desired channel, you can use the Direct Line API to communicate between the Bot Connector and your own channel.
  • The Bot Builder (.Net Library)
    • The Bot Builder library is mainly used to develop your bot application, whether through a guided conversation (FormFlow) or by understanding your user's intention, for example. The Bot Builder library has different sub-libraries that help you create the most convenient bot application for your users (see the small dialog sketch after this list).
  • Now for the bot application state, the Bot Framework provides services for working with your bot state in memory; however, for a production environment it is definitely recommended that you use some storage for your chatbot state, like Azure Cosmos DB or Azure Table storage.
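To make the Bot Builder part more concrete, here is a minimal sketch of my own (not from the official samples; the class name EchoDialog is just an example) of a Bot Builder v3 dialog that simply echoes back what the user typed:

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[Serializable]
public class EchoDialog : IDialog<object>
{
    public Task StartAsync(IDialogContext context)
    {
        // Wait for the first message from the user.
        context.Wait(MessageReceivedAsync);
        return Task.CompletedTask;
    }

    private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument)
    {
        // The Activity coming through the Bot Connector carries the user's text.
        var activity = await argument;
        await context.PostAsync($"You said: {activity.Text}");

        // Keep the dialog alive for the next message.
        context.Wait(MessageReceivedAsync);
    }
}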

Talking about the storage that you can use for your bot, there are some pre-developed functions that you can use to store the details of the application. There are Azure extensions for Azure Table storage and Azure Cosmos DB shared in the following GitHub repository for both .Net and Node.js.

As a quick walkthrough of the Azure chatbot integration with the different types of storage, even if it is in-memory, you can use the following code to define the data store you want to use:

//Azure Cosmos DB
var cosmosdb = new DocumentDbBotDataStore(uri, key, databasename, collectionname);

//Azure Table storage
var tablestorage = new TableBotDataStore(new CloudTable(tableuri));

//In-memory (development and testing only)
var inmemory = new InMemoryDataStore();

Then you can keep updating the state of the conversation with the following code; just replace cosmosdb with the variable for the storage you want to work with:

Conversation.UpdateContainer(builder =>
{
    builder.Register(c => cosmosdb)
        .Keyed<IBotDataStore<BotData>>(AzureModule.Key_DataStore)
        .AsSelf()
        .SingleInstance();
});

Don't forget to download the Bot Framework Emulator, which you can use to test your application locally.

In the following GitHub links there is Microsoft chatbot code that you can work with to understand how the chatbot really works, with some real demos using .Net. I am also sharing some of the work that I have done using .Net for the chatbot, which I will keep updating from time to time.

After building your first Service Bus brokered messaging application using queues, you can now go through this post, which will help you build your first Service Bus application using Topics & Subscriptions.

First of all, you have to create the namespace in the Windows Azure Portal and link it to the application through the configuration settings, as in this example.

After that, we can start building an application that simply sends and receives brokered messages through the Service Bus.

The target of the following function is to send a specified message through the Service Bus using a Topic and Subscription.

public string SendMessageUsingTopicsSubscriptions(string topicname, string message, string uname)
{
    // Read the Service Bus settings from the role configuration.
    var baseaddress = RoleEnvironment.GetConfigurationSettingValue("namespaceAddress");
    var issuername = RoleEnvironment.GetConfigurationSettingValue("issuername");
    var issuersecret = RoleEnvironment.GetConfigurationSettingValue("issuersecret");

    Uri namespaceaddress = ServiceBusEnvironment.CreateServiceUri("sb", baseaddress, string.Empty);
    var tokenprovider = TokenProvider.CreateSharedSecretTokenProvider(issuername, issuersecret);

    NamespaceManager namespacemanager = new NamespaceManager(namespaceaddress, tokenprovider);
    MessagingFactory messagingfactory = MessagingFactory.Create(namespaceaddress, tokenprovider);

    // Create the topic and subscription only if they do not already exist.
    if (!namespacemanager.TopicExists(topicname))
        namespacemanager.CreateTopic(topicname);
    if (!namespacemanager.SubscriptionExists(topicname, "typeofmessage"))
        namespacemanager.CreateSubscription(topicname, "typeofmessage");

    try
    {
        messagingfactory.CreateTopicClient(topicname)
            .Send(new BrokeredMessage(new MyMessage { mymessage = message, username = uname }));
        return "Message sent through the Service Bus Topic";
    }
    catch
    {
        return "Error";
    }
}

Here are the main classes that control the main interactions with the Service Bus.

BrokeredMessage: this is the unit of communication between Service Bus clients. The messages sent as a BrokeredMessage are objects or streams.

NamespaceManager: responsible for the runtime operations, no matter which messaging method is used in the Service Bus, Queue or Topic and Subscription.

MessagingFactory: responsible for the messaging entity lifecycle, whatever its type, Topic and Subscription or even the Queue.

Just as when using queues in Service Bus development, you will have to initialize clients to work with Topics and Subscriptions. You can do so by using the TopicClient and SubscriptionClient.

TopicClient: this is the object that helps send the brokered message through the Service Bus using the Topic.

SubscriptionClient: this class helps receive the brokered message from the Service Bus, depending on the topic the client is subscribed to.

The receiving function has the same body as the sending function, except for the last part, where the MessagingFactory creates a SubscriptionClient that receives within a specified interval of time, like the following example:

SubscriptionClient sc = messagingfactory.CreateSubscriptionClient(topicname, subscriptionname, ReceiveMode.ReceiveAndDelete);

BrokeredMessage bm = sc.Receive(new TimeSpan(0,2,0));
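Once the Receive call returns, you can extract the original object from the brokered message. Here is a minimal sketch of my own, assuming the same MyMessage class used in the sending function:

if (bm != null)
{
    // Deserialize the body back into the MyMessage object that was sent.
    MyMessage received = bm.GetBody<MyMessage>();
    string text = received.mymessage;
}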

As for the Service Bus, the Windows Azure middleware for solving EAI (Enterprise Application Integration), here is the second way to do the messaging: Brokered Messaging. Let me clarify what Brokered Messaging means.

Brokered Messaging is durable, asynchronous messaging that can be achieved in several ways, like Queues and Topics and Subscriptions, in such a way that the senders and the receivers don't have to be online at the moment the message is sent in order to receive it.

Starting first with Queues, this way of communication can be used to make a connection between two points; it is exactly like point-to-point messaging. A queue here works like any normal queue data structure, or like Windows Azure Queue storage (with all its predefined .Net functions): the first message sent is the first to be received by the receiver (FIFO). This also works if you have several receivers of the messages through the Service Bus.
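For illustration, here is a minimal sketch of my own of point-to-point messaging through a Service Bus queue. It assumes the same NamespaceManager and MessagingFactory shown in the topic example above; the queue name "myqueue" is purely illustrative.

// Make sure the queue exists before using it.
if (!namespacemanager.QueueExists("myqueue"))
    namespacemanager.CreateQueue("myqueue");

QueueClient queueclient = messagingfactory.CreateQueueClient("myqueue");
queueclient.Send(new BrokeredMessage("hello through the queue"));

// The receiver gets the messages in the order they were sent (FIFO).
BrokeredMessage received = queueclient.Receive(new TimeSpan(0, 2, 0));
if (received != null)
{
    string body = received.GetBody<string>();
    received.Complete();   // remove the message from the queue (default PeekLock mode)
}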

Our next Brokered Messaging method is Topics and Subscriptions: users can subscribe to a specific topic and then easily get all the messages sent through the Service Bus that are related to the subscribed topic.

Following the last 2 posts of the Live in a Cloudy World paper, on blob storage and queue storage, this document is for table storage. It contains all the necessary explanation to understand Windows Azure Table storage and its predefined functions.

Waiting for your feedback. 🙂

I have previously showed you how to work with Windows Azure Blob storage and went through the Windows Azure CloudBlobContainer class; now let me go through the CloudBlobClient class.

void testingcloudblobclientclass()
{
    //creating the account using the connection string
    //creating the blob client
    CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("ConnectionString");
    CloudBlobClient client = account.CreateCloudBlobClient();

    //creating the directory, a subdirectory and uploading a text in a blob
    client.GetBlobDirectoryReference("myblobdirectoryname")
        .GetSubdirectory("thesubdirectoryname")
        .GetBlobReference("newblobname")
        .UploadText("uploading a text into a subdirectory in windows azure blob storage");

    //getting all the containers details from the list
    foreach (CloudBlobContainer y in client.ListContainers())
    {
        y.FetchAttributes();
        string containername = y.Name;
        NameValueCollection containermetadata = y.Metadata;
        BlobContainerProperties property = y.Properties;
    }

    //getting the details of the blobs whose names start with a specified prefix
    foreach (CloudBlob h in client.ListBlobsWithPrefix("test"))
    {
        h.FetchAttributes();
        string blobname = h.Name;
        BlobProperties p = h.Properties;
    }
}

Any program is a set of functions that the processor has to execute to give the user the required output.

A function is a set of instructions that the program executes when the function is called by the program at a certain time.

The main function is what is searched for to start executing the program; in other words, the main function is the first function executed when the program runs. The main function can then call other functions within the program to execute certain actions.

To write the program code, most of the time you will use some predefined facilities, for example the writing and reading facilities, which in C++ are cout and cin. These were already defined in the iostream library, which gives your program these functionalities. The namespace std is what allows your application to consume the standard functionalities. I made a very small application, in the following part, to help you better understand how to start writing code.

For example:

#include <iostream>

using namespace std;

int add(int x, int y)
{
    return x + y;
}

int main()
{
    cout << add(4, 4);
    return 0;
}