Archive for the ‘Uncategorized’ Category

Although AI, Machine Learning and Deep Learning are actually subsets of one another, each one of them has a specific use for specific requirements. Generally, AI is about how a machine takes a decision to perform an action, Machine Learning is about prediction based on historical data, and Deep Learning is mainly about making the computer understand content such as pictures, also based on the data provided. Data is the fuel of any AI project. In this blog post I will be focusing on Deep Learning.

If you have worked with or learned about Neural Networks before, Deep Learning is the next level. A Neural Network is mainly composed of an input layer, an output layer and a middle (hidden) layer. In general, the input layer receives the data entered into the application and the output layer produces the data coming out of the system. The middle layer is where the processing is applied to the input data.

Deep Learning shares the same main concept as the neural network, but the main difference is that it has multiple middle (hidden) layers between the input and the output layers. In the following link you can find a simulation of a Neural Network for an XOR function, with all the math equations for the network available on the webpage, whether for the forward pass or the backpropagation pass. The simulation also shows you the different values that are changed to optimize the network for the best possible output based on the training dataset provided. Go ahead and play with the simulation. Try it out with one step forward, one iteration forward and 1000 iterations, so you can see the difference in the weights that are updated based on the data provided.
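
As a rough sketch of the math such a simulation walks through (this is my own generic notation for a single sigmoid hidden layer, not necessarily the exact notation the page uses), the forward pass computes

\[ a = \sigma(W_1 x + b_1), \qquad \hat{y} = \sigma(W_2 a + b_2), \qquad \sigma(z) = \frac{1}{1 + e^{-z}} \]

and each backpropagation step nudges every weight against the gradient of the loss \(L\) on the training examples:

\[ W \leftarrow W - \eta \, \frac{\partial L}{\partial W} \]

where \(\eta\) is the learning rate. Running 1000 iterations just repeats this update 1000 times, which is why the weights drift further and further from their starting values.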

If you want to start with Deep Learning, some vendors have already implemented their own deep learning toolkits that you can use, like CNTK from Microsoft (Code Sample) and TensorFlow from Google (Code Sample). There are also multiple platforms designed for specific services that can run on top of these toolkits, like Keras, which can run on top of CNTK, TensorFlow, Theano and others, and Caffe, a deep learning framework developed at Berkeley.

Remember, the most complicated part of any deep learning or data-driven project is the dataset; this part will take most of the project's time: gathering the right data, normalizing it and cleansing it. The implementation of the project won't take as much of your time as the work on the data itself.

You can find different projects that have been implemented on the different deep learning toolkits in the following links:

You can also check the following link for the different deep learning projects that are dominating, and will continue to dominate, this year.

As there has been a lot of progress lately in the business acceptance of AI and deep learning, and as it is being pushed ever harder by the technology vendors, the ethics of working on such platforms is an essential part of building these services.

In the following YouTube video, Microsoft presents the power and the impact of AI in a great way. It all depends on how you use this enormous power.

So basically, why Deep Learning now? First, with the explosion of cloud computing providers and the different vendors offering their solutions on top of them, the cost of running such an environment has become increasingly cheap, especially with the newly designed chips that help run these solutions, from NVIDIA GPUs to vendor-designed hardware like Google's TPU and Microsoft's FPGA-based machines. The second main reason is the availability of data, with sophisticated data being generated by the different business applications, social media, the Internet of Things, etc.

Understanding Azure Chat bot

Posted: March 23, 2018 in Uncategorized

In the next blog posts I will go through the different components of the Azure Chat Bot and how you can benefit from it in your application.

First of all, you can think of the chatbot as being like your application, but with a different interface. Its main target is to facilitate workflows for your users in a conversational format. Most chatbots available online exist to make it easier for your users to do things like submit a new order for your product. One of the great benefits of a chatbot is that it gives your application the ability to reach your customers through different channels easily, just by some configuration on your chatbot. Bots are also able to communicate with your users using different formats that you can define, whether plain text, UI elements like the HeroCard, or even speech.
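
For example, here is a minimal sketch of sending a HeroCard from inside a Bot Builder v3 dialog; the card title, subtitle and button text are placeholders of mine, not values from any real bot:

// Inside an IDialogContext handler (Bot Builder v3, .Net).
// Assumes using Microsoft.Bot.Connector; and using System.Collections.Generic;
var reply = context.MakeMessage();
var card = new HeroCard
{
    Title = "Contoso Coffee",                 // placeholder
    Subtitle = "Order your usual in one tap", // placeholder
    Buttons = new List<CardAction>
    {
        // ImBack sends the button's value back to the bot as if the user typed it
        new CardAction(ActionTypes.ImBack, "Order now", value: "order")
    }
};
reply.Attachments = new List<Attachment> { card.ToAttachment() };
await context.PostAsync(reply);

The channel then renders the same card in its own native look, which is exactly the point: you describe the UI once and the Connector adapts it per channel.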

The following picture, which I got from the Microsoft website, showcases the similarity between a traditional application and a bot application.

In this blog post, I will clarify the different components of the libraries that you can use to build the bot application and its workflow. For the Microsoft Chat Bot, Microsoft has done some work decoupling the services that you need in order to build your bot. There are 2 main libraries that you will be using to develop the application using .Net:

  • The Bot Connector (.Net Library)
    • This library is mainly used to connect your bot to your channels and vice versa using the REST API. The Connector uses the Activity object to pass information from one side to the other. There are some predefined channels that you can use for your bot application, like Skype, Twilio and others. If you don't see the desired channel, you can use the Direct Line API to communicate between the Bot Connector and your channel.
  • The Bot Builder (.Net Library)
    • The Bot Builder library is mainly used to develop your bot application, whether through a guided conversation (FormFlow; see the sketch right after this list) or by understanding your user's intention, for example. The Bot Builder library has different sub-libraries that help you create the right kind of bot application for your users.
  • As for the bot application state, the Bot Framework provides services for working with your bot state in-memory; however, for a production environment it is definitely recommended to use persistent storage for your chatbot state, like Azure Cosmos DB or Azure Table storage.
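
As a minimal sketch of what a guided FormFlow conversation looks like with the Bot Builder v3 library (the SandwichOrder type and its fields are made-up placeholders, not part of the SDK):

using System;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.FormFlow;

// A made-up order type: FormFlow generates one prompt per public field.
[Serializable]
public class SandwichOrder
{
    public string Bread;    // e.g. "white", "wheat"
    public string Filling;  // e.g. "turkey", "veggie"

    public static IForm<SandwichOrder> BuildForm()
    {
        return new FormBuilder<SandwichOrder>()
            .Message("Welcome to the sandwich bot!")
            .Build();
    }
}

// Wiring the form into the conversation, e.g. from your MessagesController:
// await Conversation.SendAsync(activity,
//     () => FormDialog.FromForm(SandwichOrder.BuildForm, FormOptions.PromptInStart));

FormFlow then walks the user through the fields one by one and hands you a filled-in SandwichOrder when the conversation completes.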

Speaking of the storage that you can use for your bot, there are some pre-developed functions that you can use to store the details of the application. There are Azure extensions for Azure Table storage and Azure Cosmos DB shared in the following GitHub repositories, for both .Net and Node.js.

As a quick walkthrough of the Azure chatbot integration with the different types of storage, including in-memory, you can use the following code to define the store you want to use:

// Azure Cosmos DB
var cosmosdb = new DocumentDbBotDataStore(uri, key, databasename, collectionname);

// Azure Table storage
var tablestorage = new TableBotDataStore(new CloudTable(tableuri));

// In-memory (for local development and testing only)
var inmemory = new InMemoryDataStore();

 

Then you can register the chosen store for the conversation state with the following code; just replace cosmosdb with the variable for the storage you want to work with:

Conversation.UpdateContainer(builder =>
{
    builder.Register(c => cosmosdb)
        .Keyed<IBotDataStore<BotData>>(AzureModule.Key_DataStore)
        .AsSelf()
        .SingleInstance();
});
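
Under the hood this is an Autofac registration: Conversation.UpdateContainer hands you the container builder, and registering your store as a keyed singleton for IBotDataStore&lt;BotData&gt; is what makes the Bot Builder state services resolve your storage instead of the default in-memory one.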

 

Don't forget to download the ChatBot emulator, which you can use to test your application locally.

In the following GitHub links there is Microsoft chatbot code that you can work with to understand how the chatbot really works, with some real demos using .Net. I am also sharing some of the work that I have done using .Net for the chatbot, which I will keep updating from time to time.

After building your first Service Bus brokered messaging application using queues, you can now go through this post, which will help you build your first Service Bus application using Topics & Subscriptions.

First of all, you have to create the namespace on the Windows Azure Portal and link it to the application through the configuration settings, as in this example.

After that, we can start building an application that simply sends and receives brokered messages through the Service Bus.

The following function's target is to send a specified message through the Service Bus using a Topic and Subscription.

public string SendMessageUsingTopicsSubscriptions(string topicname, string message, string uname)
{
    var baseaddress = RoleEnvironment.GetConfigurationSettingValue("namespaceAddress");
    var issuername = RoleEnvironment.GetConfigurationSettingValue("issuername");
    var issuersecret = RoleEnvironment.GetConfigurationSettingValue("issuersecret");

    Uri namespaceaddress = ServiceBusEnvironment.CreateServiceUri("sb", baseaddress, string.Empty);
    var tokenprovider = TokenProvider.CreateSharedSecretTokenProvider(issuername, issuersecret);

    NamespaceManager namespacemanager = new NamespaceManager(namespaceaddress, tokenprovider);
    MessagingFactory messagingfactory = MessagingFactory.Create(namespaceaddress, tokenprovider);

    // Create the topic and its subscription only if they do not exist yet;
    // CreateTopic/CreateSubscription throw if the entity already exists.
    if (!namespacemanager.TopicExists(topicname))
        namespacemanager.CreateTopic(topicname);
    if (!namespacemanager.SubscriptionExists(topicname, "typeofmessage"))
        namespacemanager.CreateSubscription(topicname, "typeofmessage");

    try
    {
        messagingfactory.CreateTopicClient(topicname)
            .Send(new BrokeredMessage(new MyMessage { mymessage = message, username = uname }));
        return "Message sent through the Service Bus Topic";
    }
    catch
    {
        return "Error";
    }
}

Here are the main classes that drive all the main operations against the Service Bus.

BrokeredMessage: this is the unit of communication between the Service Bus clients. The messages sent as brokered messages can be objects or streams.

NamespaceManager: responsible for the namespace management operations (creating topics, subscriptions and queues), no matter which messaging method is used in the Service Bus, Queue or Topic and Subscription.

MessagingFactory: responsible for creating the messaging clients and managing their lifecycle, whatever the entity type: topic and subscription, or even the queue.

Just as you initialize a queue client when using queues in Service Bus development, you have to initialize clients for Topics and Subscriptions. You can do so by using the TopicClient and SubscriptionClient.

TopicClient: this is the object that helps send the brokered message through the Service Bus using a Topic.

SubscriptionClient: this class helps receive the brokered message from the Service Bus, depending on the topic the client is subscribed to.

The receiving function has the same body as the sending function, except for the last part, where the MessagingFactory creates a SubscriptionClient that receives with a specified timeout, like the following example:

SubscriptionClient sc = messagingfactory.CreateSubscriptionClient(topicname, subscriptionname, ReceiveMode.ReceiveAndDelete);
BrokeredMessage bm = sc.Receive(new TimeSpan(0, 2, 0));
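
To get the payload back out, assuming the same MyMessage type used on the sending side, a minimal sketch would be:

// Receive() returns null if the timeout elapses with no message, so check first.
if (bm != null)
{
    MyMessage received = bm.GetBody<MyMessage>();
    Console.WriteLine("{0}: {1}", received.username, received.mymessage);
}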

As for the Service Bus, the Windows Azure middleware for solving EAI (Enterprise Application Integration), here is the second way to do messaging: Brokered Messaging. Let's clarify what Brokered Messaging means.

Brokered Messaging is durable, asynchronous messaging, achieved in several ways such as "Queues" and "Topics and Subscriptions", where the senders and the receivers don't have to be online at the same time: the message is stored until the receiver is ready to receive it.

Starting first with Queues: this way of communication connects two endpoints, exactly like point-to-point messaging. A Service Bus queue behaves like any normal queue data structure, or like the Windows Azure Queue storage (with all its predefined .Net functions): the first message sent is the first to be received (FIFO). This also works if you have several receivers consuming messages through the Service Bus.
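
As a minimal sketch, reusing a namespacemanager and messagingfactory built the same way as in the topic example above (the queue name is just a placeholder of mine):

// Point-to-point messaging over a Service Bus queue.
if (!namespacemanager.QueueExists("myqueue"))
    namespacemanager.CreateQueue("myqueue");

QueueClient queueclient = messagingfactory.CreateQueueClient("myqueue");
queueclient.Send(new BrokeredMessage("hello through the queue"));

// FIFO: the first message sent is the first one received.
BrokeredMessage received = queueclient.Receive(new TimeSpan(0, 2, 0));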

The next Brokered Messaging option is Topics and Subscriptions: users can subscribe to a specific topic and then easily get all the messages sent through the Service Bus related to the subscribed topic.

Following the last 2 posts of the Live in a Cloudy World paper, on blob storage and queue storage, this document is for table storage. It contains all the explanation necessary to understand Windows Azure Table Storage and its predefined functions.

Waiting for your feedback. :)

I have previously shown you how to work on Windows Azure using Blob Storage and gone through the Windows Azure CloudBlobContainer class; now let me go through the CloudBlobClient class.

void testingcloudblobclientclass()
{
    // creating the account using the connection string, then the blob client
    CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("ConnectionString");
    CloudBlobClient client = account.CreateCloudBlobClient();

    // creating a directory, a subdirectory and uploading text into a blob
    client.GetBlobDirectoryReference("myblobdirectoryname")
        .GetSubdirectory("thesubdirectoryname")
        .GetBlobReference("newblobname")
        .UploadText("uploading a text into a subdirectory in windows azure blob storage");

    // getting the details of every container in the list
    foreach (CloudBlobContainer y in client.ListContainers())
    {
        y.FetchAttributes();
        string containername = y.Name;
        NameValueCollection containermetadata = y.Metadata;
        BlobContainerProperties property = y.Properties;
    }

    // getting the details of the blobs whose names start with a specified prefix
    foreach (CloudBlob h in client.ListBlobsWithPrefix("test"))
    {
        h.FetchAttributes();
        string blobname = h.Name;
        BlobProperties p = h.Properties;
    }
}

Any program is a set of functions that the processor has to execute to give the user the required output.

A function is a set of instructions that the program executes when the function is called at a certain point in time.

The main function is what the processor searches for to start executing the program; in other words, the main function is the first function executed. The main function can then call other functions within the program to execute certain actions.

To write program code, most of the time you will use some predefined functions, for example the writing and reading facilities, which in C++ are called cout and cin. These two are already defined in the iostream library, which is what allows your program to use them, and the namespace std is what allows your application to consume these standard facilities. I made a very small application in the following part to help you better understand how to start writing code.

For example:

#include <iostream>

using namespace std;

int add(int x, int y)

{

    return x+y;

}

int main()

{

    cout << add(4,4);

    return 0;

}
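
Compiling and running this prints 8: main is executed first, it calls add(4, 4), and cout writes the returned value to the standard output.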

V8 JavaScript Engine

Posted: June 14, 2012 in Uncategorized

In my last few blog posts I focused on Event-Driven Programming, JavaScript and Node.JS. Now let's see how this actually works. A JavaScript engine is an interpreter that executes JavaScript. These engines are most often used in web browsers; each browser has its own engine, for example IE has its engine codenamed "Chakra" and Chrome has V8. And here comes the topic of this blog post: V8.

V8 is a very powerful engine developed by Google, written mainly in C++. The main reason I am talking about V8 is that Node JS is based on it, so I tried to understand how the parallel handling of incoming requests is done using it. V8 is an open source engine; you can work on it and learn more about it from this link. In the coming part I will try to show how the V8 engine works and how I think it can help the cloud solve the problem of handling an increasing number of connections.

V8 is mainly composed of 3 things: the Handle, the Scope and the Context. I won't take long explaining them (for more information you can visit this webpage); I will cover all 3, but I won't go deep. The first is the handle, which is responsible for pointing to objects. Handles are divided into 2 kinds: the local handle and the persistent handle. The local one is created and released within a function call; the persistent one is created and deleted only when explicitly specified, and it mainly deals with long-lived objects such as the DOM.

The Scope is the container of the handles according to this website.

As for the Context, this is the environment in which the JavaScript is interpreted and executed; one of the important things it offers is that it allows multiple separate pieces of JavaScript code to run in a single instance of the V8 engine.

Having explained the main components of V8 so far, let us try to clarify how this works for Node JS, especially since it is based on V8. With JavaScript running on the server side, there is no DOM to deal with, only incoming traffic for which the required functions are executed and the results returned. So what I imagine is that, rather than creating a new context for each incoming request, some of the incoming traffic can share the same context, so the functions execute faster.

Now let me redefine what Cloud Computing is. I have published a previous blog post about it, but the reason I am rewriting this is that I might reach a better understanding of the concept or cover it from other points of view. Remember that at the beginning of the internet era, internet users might face several problems, one of them being the availability of the application or website they were looking for. Let's take a small example of why this happened. When X users are trying to access a web application, each connection consumes some temporary memory, say around 2 MB, so the server where the website is hosted must have more than X*2 MB of memory, or else the server will go down and won't be able to handle the incoming connections. So the main solution is to increase and enhance your hardware; regardless of what enhancement is done on the software side, you will always need more hardware.

Let's jump to another part: the business owners. Large enterprises and some medium ones have to build their own datacenters and IT infrastructure. Each time their business grows and the traffic on their applications increases, whether from clients using the applications hosted on their servers or from internal employees accessing the internal systems, they have to buy new hardware resources to extend their IT infrastructure. From the last point there are 3 main problems:

  • Availability
  • Scalability (the memory, storage and processing power they need might not be available when needed, due to the limited resources they have)
  • Cost

The only solution, and the main one, is always to enhance the hardware and buy new servers, regardless of software updates. Here comes Cloud Computing's role, by providing, in my opinion, one main thing: the "illusion of infinite hardware resources". The datacenter has thousands of servers, maybe millions, where the applications are hosted. This means the cloud can provide you with great flexibility for hosting your storage and the processing power for your applications. As it is impossible for every company to have its own cloud, there are cloud providers, like Microsoft, Google, Amazon and several others, that deliver Cloud Computing to you as a service.

Let me give you an example explaining what "as a service" means. A factory might build its own electric generators for its high consumption of electricity. This costs the factory a lot of money: a capital cost for building them, plus regularly paying the operating costs needed just to run the generators. Or the factory can simply take electricity from the grid as a service: it pays for the amount of electricity it uses and shares the supply with others.

The same goes for the cloud: rather than paying millions of dollars just to build your IT infrastructure, you can take it as a service from a cloud provider, where you share the resources with a lot of other companies. This means you cut off your IT infrastructure capital costs and run with operating costs only, depending on the amount of storage used and the processing power consumed. More blog posts will follow about the layers of cloud computing and how each layer can benefit its users.

Node JS on Windows Azure

Posted: June 8, 2012 in Uncategorized

One of the latest things people have heard about is Node JS; actually, when you develop on any of the available cloud platforms you will certainly hear about it, and you may think of using it. So first of all, what is Node JS and what more can it do to help you build your application? We all know that JS stands for JavaScript, and the Node part means the server side. Yes, JavaScript on the server side. Node JS's main target is to help you build very highly scalable applications over the network... bla bla bla. How is that?

Let's imagine that you get X requests on your web application; each one of these requests consumes a certain amount of memory, so your server will only be able to handle a number of requests equal to its memory divided by the per-request consumption, which is quite limited and costs a lot. (I got this example from this link from IBM developerWorks.) Node JS allows you to handle incoming requests with more parallelism, using JavaScript and event-driven programming. In this post we will talk about the same thing, but for the server side.

Node is a server-side JS interpreter; it changes how the server processes and works with incoming requests. Node.JS is based on the V8 JavaScript engine, an ultra-fast engine; you can download it, read its documentation and embed it into any of your applications from this link.

Now let's go into the example. First of all, with the new features added to Windows Azure in June 2012, we need to sign in to the portal, create a new website and set its URL and its Git repository; you can download and install the Node.JS tools and Git from these 2 links. Let's start by opening the Windows Azure PowerShell, and don't forget to run it as administrator. The coming few instructions will help you build your web application; just write the commands shown in your PowerShell command line.

After creating the directory, you can now create the Windows Azure application. But first, don't forget to change directory into the directory you created.

 

If you open the directory you made, where you created the Azure service, you will find the following:

So let’s create the new WebRole for our application.

If you don't name the WebRole, it will be named webrole1. You can also see the folder created and the files added just by navigating in the browser; you should see the following in the folder named mywebrole.

You can open and modify server.js by simply entering the following command; you will be able to see server.js in Notepad, as the following picture shows.

 

Let's run our application and see the result.

The result should be like the following picture:

To modify the application or add new pages, you can add new JavaScript files or modify the server.js already created.

Now let's upload to our Azure account, supposing you have downloaded Git, created a new website and set the Git publishing credentials. To do so, after creating a website application on the Windows Azure Portal, you will find a link for setting up Git publishing on the right-most side of the window; after clicking on this link:

Now, depending on what you chose during the installation (I chose Git Bash), you may have given it permission so you can access it through PowerShell. Either way, you will use the same commands.

Make sure you are in the right directory and start initializing Git with the following commands (the usual git init, then git add . and git commit sequence).

Now continue and enter the following command to make the connection and deploy the application to the portal (typically adding the portal's Git URL as a remote, e.g. git remote add azure <your Git URL>).

You can get this URL from your portal from:

The last command line is the push itself (typically git push azure master).

Now if you simply go to the link of your application, you will find the hello world we created. :)