
After building your first Service Bus Brokered Messaging application using Queues, you can now go through this post, which will help you build your first Service Bus application using Topics and Subscriptions.

First of all, you have to create the namespace on the Windows Azure Portal and link it to the application through the configuration settings, as in this example.

After that, we can start building an application that simply sends and receives Brokered Messages through the Service Bus.

The following function's target is to send a specified message through the Service Bus using a Topic and a Subscription.

public string SendMessageUsingTopicsSubscriptions(string topicname, string message, string uname)
{
    try
    {
        // Read the namespace address and credentials from the role configuration
        var baseaddress = RoleEnvironment.GetConfigurationSettingValue("namespaceAddress");
        var issuername = RoleEnvironment.GetConfigurationSettingValue("issuername");
        var issuersecret = RoleEnvironment.GetConfigurationSettingValue("issuersecret");

        Uri namespaceaddress = ServiceBusEnvironment.CreateServiceUri("sb", baseaddress, string.Empty);

        NamespaceManager namespacemanager = new NamespaceManager(
            namespaceaddress, TokenProvider.CreateSharedSecretTokenProvider(issuername, issuersecret));

        MessagingFactory messagingfactory = MessagingFactory.Create(
            namespaceaddress, TokenProvider.CreateSharedSecretTokenProvider(issuername, issuersecret));

        // Create the topic and a subscription on it
        var topic = namespacemanager.CreateTopic(topicname);
        var sub = namespacemanager.CreateSubscription(topic.Path, "typeofmessage");

        // Create a TopicClient and send the brokered message through it
        TopicClient topicclient = messagingfactory.CreateTopicClient(topic.Path);
        topicclient.Send(new BrokeredMessage(new MyMessage() { mymessage = message, username = uname }));

        return "Message sent through the Service Bus Topic";
    }
    catch (Exception)
    {
        return "Error";
    }
}


Here are the main classes that control the main interactions with the Service Bus.

BrokeredMessage: this is the unit of communication between Service Bus clients. The messages sent as Brokered Messages are objects or streams.

NamespaceManager: is responsible for the messaging entities' lifecycle (creating and deleting them), no matter which entity type is used in Service Bus, Queue or Topic and Subscription.

MessagingFactory: is responsible for the runtime operations (sending and receiving messages), whatever the entity type, Topic and Subscription or even the Queue.

If you have been using Queues in your Service Bus development, note that for Topics and Subscriptions you have to initialize different clients. You can do so by using the TopicClient and SubscriptionClient classes.

TopicClient: this is the object used to send brokered messages through the Service Bus using a Topic.

SubscriptionClient: this class is used to receive brokered messages from the Service Bus, depending on the topic the client is subscribed to.

The receiving function has the same body as the sending function, except for the last part, where the MessagingFactory creates a SubscriptionClient that receives messages with a specified time interval, like the following example:

SubscriptionClient sc = messagingfactory.CreateSubscriptionClient(topicname, subscriptionname, ReceiveMode.ReceiveAndDelete);

BrokeredMessage bm = sc.Receive(new TimeSpan(0,2,0));

Service Bus is the Windows Azure middleware for solving EAI (Enterprise Application Integration), and Brokered Messaging is the second way to do messaging with it. Let's clarify what Brokered Messaging means.

Brokered Messaging is durable, asynchronous messaging. There are several ways to achieve it, such as Queues and Topics and Subscriptions, and in all of them the senders and receivers don't have to be online at the same time: a message can be sent now and received later.

Starting first with Queues: this form of communication connects two endpoints, just like Point-to-Point messaging. A Service Bus Queue behaves like any normal queue data structure, or like Windows Azure Queue storage (with all its predefined .NET functions): the first message sent is the first to be received (FIFO). This also holds if you have several receivers reading from the queue through the Service Bus.
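The FIFO guarantee described above can be sketched with a plain queue data structure. This is only a toy illustration of the ordering behaviour (the function name deliverInOrder is mine, using C++'s std::queue, not the Service Bus API):

```cpp
#include <queue>
#include <string>
#include <vector>

// Toy model of FIFO delivery: whatever order messages are sent in
// is the order the receiver gets them back.
std::vector<std::string> deliverInOrder(const std::vector<std::string>& sent)
{
    std::queue<std::string> q;
    for (const std::string& m : sent)
        q.push(m);                      // sender side: enqueue each message

    std::vector<std::string> received;
    while (!q.empty())
    {
        received.push_back(q.front());  // receiver side: oldest message first
        q.pop();
    }
    return received;
}
```

Sending "first", "second", "third" yields them back in exactly that order, which is the guarantee a Service Bus Queue gives its receivers.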

The next Brokered Messaging option is Topics and Subscriptions: users subscribe to a specific topic, and after that they easily get all the messages sent through the Service Bus that relate to the subscribed topic.
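To make the fan-out behaviour concrete, here is a minimal, hypothetical in-memory broker sketch (the ToyBroker type and its method names are mine, plain C++ rather than the Service Bus API): every subscription of a topic gets its own copy of each published message.

```cpp
#include <cstddef>
#include <map>
#include <queue>
#include <string>
#include <vector>

// Toy publish/subscribe broker: each subscription holds its own queue,
// and publishing to a topic copies the message into every subscription.
struct ToyBroker
{
    std::map<std::string, std::vector<std::queue<std::string>>> topics;

    // Returns an index identifying the new subscription within the topic.
    std::size_t subscribe(const std::string& topic)
    {
        topics[topic].emplace_back();
        return topics[topic].size() - 1;
    }

    void publish(const std::string& topic, const std::string& message)
    {
        for (auto& subscription : topics[topic])
            subscription.push(message);   // fan out to every subscription
    }

    // Receive-and-delete: take the oldest message from one subscription.
    std::string receive(const std::string& topic, std::size_t sub)
    {
        std::queue<std::string>& q = topics[topic][sub];
        std::string m = q.front();
        q.pop();
        return m;
    }
};
```

Two subscribers of the same topic both receive a copy of one published message; that is the difference from a Queue, where each message goes to a single receiver.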

Following the last two posts of the Live in a Cloudy World paper, on Blob storage and Queue storage, this document is about Table storage. It contains all the necessary explanation to understand Windows Azure Table Storage and its predefined functions.

Waiting for your feedback!

I have previously shown you how to work with Windows Azure Blob Storage and went through the CloudBlobContainer class; now let me go through the CloudBlobClient class.

void testingcloudblobclientclass()
{
    // creating the account using the connection string
    // and creating the blob client
    CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("ConnectionString");
    CloudBlobClient client = account.CreateCloudBlobClient();

    // creating a directory, a subdirectory and uploading a text in a blob
    // (the directory, subdirectory and blob names here are placeholders)
    client.GetBlobDirectoryReference("container/directory")
        .GetSubdirectory("subdirectory")
        .GetBlobReference("blob.txt")
        .UploadText("uploading a text into a subdirectory in windows azure blob storage");

    // getting all the containers' details from the client's list
    foreach (CloudBlobContainer y in client.ListContainers())
    {
        string containername = y.Name;
        NameValueCollection containermetadata = y.Metadata;
        BlobContainerProperties property = y.Properties;
    }

    // getting the details of the blobs whose names start with a specified prefix
    // (ListBlobsWithPrefix returns IListBlobItem entries; filter to CloudBlob)
    foreach (CloudBlob h in client.ListBlobsWithPrefix("test").OfType<CloudBlob>())
    {
        string blobname = h.Name;
        BlobProperties p = h.Properties;
    }
}

Any program is a set of functions that the processor has to execute to give the user the required output.

A function is a set of instructions that the program executes when the function is called at a certain time.

The main function is what the processor searches for to start executing the program; in other words, the main function is the first function executed by the processor. The main function can then call other functions within the program to execute certain actions.

To write program code, you will most of the time use some predefined functions, like the writing and reading functions, which in C++ are called cout and cin. These two functions were already defined in a library, iostream, that allows your program to understand these functionalities; the namespace std is what allows your application to consume the standard functionality. I made a very small application, in the following part, to help you better understand how to start writing code.

For example:

#include <iostream>

using namespace std;

int add(int x, int y)
{
    return x + y;
}

int main()
{
    cout << add(4, 4);
    return 0;
}
V8 JavaScript Engine

Posted: June 14, 2012 in Uncategorized

In my last few blog posts I focused on Event-Driven Programming, JavaScript and Node.JS. Now let's see how this actually works. A JavaScript engine is an interpreter that executes JavaScript. These engines are most of the time used in web browsers, and each browser has its own: for example, IE has the engine codenamed "Chakra", and Chrome has V8. And here comes the topic of this blog post: V8.

V8 is a very powerful engine developed by Google; it is mainly written in C++. The main reason I am talking about V8 is that Node.JS is based on it, so I tried to understand how the parallelism of incoming requests is achieved using it. V8 is an open source engine; you can work on it and learn more about it from this link. In the coming part I will try to show how the V8 engine works and how I think it can help the cloud solve the problem of handling an increasing number of connections.

V8 is mainly composed of three things: Handles, Scopes and Contexts. I won't go into depth explaining them, but for more information you can visit this webpage. The first is the handle; it is responsible for pointing to objects. Handles are divided into two kinds: the local handle and the persistent handle. The local handle is created when there is a function call; the persistent handle is created and deleted only when specified, and it mainly deals with DOM objects.

The Scope is the container of the handles according to this website.

The Context is the environment that interprets and executes the JavaScript. One of the important things it offers is allowing multiple pieces of JavaScript code to run in a single instance of the V8 engine.

Having explained the main components of V8 so far, let us try to clarify how this works for Node.JS, especially as it is based on V8. With JavaScript running on the server side, there is no DOM to deal with, only incoming traffic for which the required functions are executed and the results returned. So what I imagine is that, with each incoming request, rather than creating and executing in a new context, V8 allows some of the incoming traffic to share the same context, executing the functions faster.

Now let me redefine what Cloud Computing is. I have posted a previous blog post about it before; the reason I am rewriting this is that I might reach a better understanding of the concept or cover it from other points of view. Remember that at the beginning of the internet era, internet users faced some problems, one of them being the availability of the application or website they were looking for. Let's take a small example of why this was happening. When X users try to access a web application, each connection consumes some temporary memory, say around 2 MB, so the server hosting the website must have more than X*2 MB of memory, or else the server will go down and won't be able to handle the incoming connections. So the main solution is to increase and enhance your hardware: regardless of what enhancement is done on the software side, you will always need more hardware.
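The arithmetic above can be sketched in a few lines; the function name is mine, and the 2 MB per connection figure is just the assumption from the example:

```cpp
// With a fixed amount of temporary memory consumed per connection,
// the server's total memory puts a hard ceiling on concurrent users.
long maxConcurrentConnections(long serverMemoryMB, long memoryPerConnectionMB)
{
    return serverMemoryMB / memoryPerConnectionMB;
}
```

A server with 4096 MB of memory and 2 MB per connection tops out at 2048 concurrent connections; once the number of users exceeds that, the server cannot accept new ones, and without the cloud the only fix is buying more hardware.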

Let's jump to another part: the business owners. Large enterprises, and some medium ones, have to build their own datacenters and IT infrastructure. Each time their business grows, so does the incoming traffic on their applications, whether from clients reaching the applications hosted on their servers or from internal employees accessing the internal systems, and they have to buy new hardware resources to expand their IT infrastructure. From this last point there are three main problems:

  • Availability
  • Scalability (for the memory, storage and processing power they might not find it when needed due to the limited resources they have)
  • Cost

The only and main solution is always enhancing the hardware and buying new servers, regardless of software updates. Here comes the Cloud Computing role, providing, in my opinion, one main thing: the "illusion of infinite hardware resources". The datacenter has thousands of servers, maybe millions, where the applications are hosted. This means the cloud can give you great flexibility in the storage and processing power available to your applications. As it is impossible for all companies to have their own cloud, there are cloud providers, like Microsoft, Google, Amazon and several others, that deliver Cloud Computing to you as a service.

Let me give you an example explaining what "as a service" means. A factory might build its own electric generators to cover its high consumption of electricity. This costs the factory a lot of money: a capital cost for building them, and then regular operating costs just for running the generators. Or the factory can simply take electricity from the grid as a service, paying for the amount of electricity it uses and sharing the supply with others.

The same goes for the cloud: rather than paying millions of dollars just to build your IT infrastructure, you can take it as a service from a cloud provider, where you share the resources with a lot of other companies. This means you cut off your IT infrastructure capital cost and run with only the operating costs, depending on the amount of storage used and the processing power consumed. More blog posts will follow about the layers of cloud computing and how each layer can benefit its users.