Archive for June, 2012

HTTP Protocol

Posted: June 23, 2012 in Live in a Cloudy World

HTTP is one of the main web protocols, used to access website content. When you surf the internet and enter www.example.com, a default port is always assumed, which is port number 80 for the Hypertext Transfer Protocol. If for any reason you want to use a different port, all you have to do is enter the URL followed by ":<port-number>" (www.example.com:73). The web client initiates a TCP connection to port number 80, where the server listens and waits for incoming requests.
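As a small sketch of the port rule above, Node's built-in URL parser makes the default explicit (www.example.com is just the illustrative host from the paragraph):

```javascript
// Parse two URLs: one relying on the default HTTP port, one overriding it.
const defaultUrl = new URL('http://www.example.com/');
const customUrl = new URL('http://www.example.com:73/');

// When no port is given, the parsed URL carries an empty port
// and the http scheme implies 80.
console.log(defaultUrl.port); // ''
console.log(customUrl.port);  // '73'
```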

Web clients can send different methods to request specific functions and make better use of the internet connection. For example, when developing applications that target different web browsers, you might first detect the type of the client's browser and then respond with the right document or file. Some methods can also be used to retrieve the meta-information describing the data available on a website; this makes it possible to search for a specific type of data across websites, rather than downloading and searching each site's whole body.
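A minimal sketch of the browser-detection idea (the User-Agent substrings and file names here are invented for illustration, not taken from any real site):

```javascript
// Pick a document to serve based on the client's User-Agent request header.
function pickDocument(userAgent) {
  if (userAgent.includes('MSIE')) return 'index-ie.html';       // hypothetical file
  if (userAgent.includes('Chrome')) return 'index-chrome.html'; // hypothetical file
  return 'index.html'; // generic fallback for everything else
}

console.log(pickDocument('Mozilla/4.0 (compatible; MSIE 8.0)')); // index-ie.html
console.log(pickDocument('Mozilla/5.0 AppleWebKit Chrome/19.0')); // index-chrome.html
console.log(pickDocument('SomeOtherAgent/1.0'));                  // index.html
```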

Some of the available methods provide the user and the server certain functionality: GET, HEAD, OPTIONS and TRACE mainly retrieve data, files and documents from the server, while POST and PUT cover the user's actions performed on documents hosted on the server. Here is a list of the available methods (summarized from Wikipedia):

  • HEAD retrieves only the meta-information, without the body, so you don't have to download the whole page.
  • GET retrieves data content from the resource and has no function beyond that.
  • POST submits data to the specified resource.
  • PUT uploads a representation of the specified resource.
  • DELETE deletes the specified resource.
  • TRACE echoes back the received request, so the client can check whether any modifications were made by intermediate servers.
  • OPTIONS returns the methods the server supports.
  • CONNECT converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.
  • PATCH applies partial modifications to a resource.

Multi-Tenancy

Posted: June 23, 2012 in Virtualization

When designing an application for clients, you always have to consider how their data and application will be maintained. You also have to keep in mind that your application may be customized per client, from the user interface down to individual processes, with functionality removed or added. Multi-tenancy is the concept of serving multiple clients from one application in such a way that each client accesses his own customizable instance and can only reach his own data.

Multi-tenancy has a lot of benefits. In the past, to host an application that multiple clients could use, each client's copy had to be hosted on separate hardware. With multi-tenancy they all share the same application and can still adapt it to their needs, which reduces the infrastructure cost while keeping a level of security that makes sure one client's data is not merged with another's.
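A toy sketch of the data-isolation side of multi-tenancy (the table contents and tenant ids are invented for illustration): all rows live in one shared store, but every read is forced through a tenant filter.

```javascript
// One shared store; each row carries the id of the tenant that owns it.
const invoices = [
  { tenantId: 'acme', amount: 100 },
  { tenantId: 'acme', amount: 250 },
  { tenantId: 'globex', amount: 75 },
];

// Every query goes through this function, so a tenant only sees its own rows.
function invoicesFor(tenantId) {
  return invoices.filter((row) => row.tenantId === tenantId);
}

console.log(invoicesFor('acme').length);   // 2
console.log(invoicesFor('globex').length); // 1
```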

A lot of my previous posts carried "LIVE IN A CLOUDY WORLD" on them, and many people have asked me what that is. As you have surely noticed, I love and believe in Cloud Computing, to the point that I have no problem spending all my time researching it, implementing new things on it, testing them and so on. That is actually what I am doing now: rather than wasting my time and getting nothing back, and rather than studying things at university that I don't like and see no benefit in, I decided to start digging and doing the research I want in the field I wish to work in. So any related work, post or application that I publish on the internet will carry a category or a hashtag, #LiveinaCloudyWorld, on Twitter, Facebook, WordPress, LinkedIn or even Paper.li.

As a Microsoft Student Partner I am Microsoft oriented, so you may find a lot of posts related to Microsoft technology; however, I am doing my best to cover the field of Cloud Computing from different points of view and different technology implementations of the concept.

What exactly will I cover in "LiveinaCloudyWorld"? Frankly, everything related to the field of Cloud Computing that I dig into will be posted, whether because I am excited about it or because I like the field and am still discovering it. Basically you can find an introduction to Cloud Computing; the infrastructure, from the processor and hardware to the operating system; the platform, especially Windows Azure, though I will try to cover others; and I will try to get into Software as a Service, although I am not really interested in it.

You may also find several posts on other technologies or concepts and how they are implemented on the different Cloud Computing platforms.

Starting to build my own cloud from scratch: as a Microsoft Student Partner, I will start with the platform Windows Azure is built on, which is the Intel microprocessor, and with how virtualization is executed at the processor level. I am sure every developer, especially those who develop in C/C++, is familiar with data structures and why we use them in our applications. For virtual machines there is a structure you have to know about: the Virtual Machine Control Structure (VMCS). You won't find the VMCS on every processor; the processor must have the VMX extensions to allow you, as a developer, to build VMs on it or play with its internals.

The VMX extensions allow two kinds of software to run on the processor: the first is called the VMM (Virtual Machine Monitor) and the second is the guest OS. As their names suggest, the VMM has full control of the infrastructure and the hardware platform; it acts as the host. The guest VM holds the stack of the guest OS and its applications; it executes normally, as if there were no VMM, using the shared resources. Note that each VM runs independently of the other VMs sharing the same resources.

So how does that work, and what can the VMM do with the guest?

The two main transitions between the VMM and the guest VM happen when the guest VM starts and when it exits. When the VMM issues the VMLAUNCH or VMRESUME instruction, it effectively releases the processor to the guest VM, and it regains control when that guest VM exits.

As for the processor, each logical processor can run only one VM at a time. That doesn't mean it cannot support more than one: each VM has something called a launch state, which defines whether the VM is active or inactive. For the active one, the logical processor executes the VM with its current state.
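The launch/exit cycle described above can be sketched, very roughly, as the following VMM loop (pseudocode only, not real VMX programming; setting up a VMCS involves many more fields than shown here):

```
loop:
    configure the VMCS for the guest (guest state, exit conditions)
    if the guest was never launched:
        VMLAUNCH            ; processor starts running guest code
    else:
        VMRESUME            ; processor continues the guest from its saved state
    ; ... guest runs until it hits an exit condition ...
    on VM exit:
        read the exit reason from the VMCS
        handle it in the VMM (emulate a device, handle a fault, etc.)
    repeat
```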


Finally I got Windows Server installed on one of my computers, continuing my research on cloud computing. Now that I have Windows Server 8 Hyper-V installed, I will show you how to manage it and configure the virtual machines created on your own private cloud. For those of you who don't know Windows PowerShell, here is one of my recent blog posts describing it and how you can benefit from it. Just to mention, my version of Windows Server 8 is Windows Hyper-V Server 8; it is a bit different from the others, as here you do everything you need using cmdlets.

First of all you have to get the PowerShell library for Hyper-V, which extends the PowerShell already installed on your Hyper-V machine and gives you more capabilities. You can get the additional libraries from this website. After you download the zip file and unzip it, you can install it on your Hyper-V machine by changing the directory to where the file was unzipped and then calling the installation file: install.cmd. You can monitor the installation, and at the end you will see a PowerShell window open.

You can list all the commands by simply entering the following in the PowerShell window:

    Get-Command -Module HyperV

Now that we have the PowerShell library installed, let's create our first VM with it. The steps are simple: you can create a VM on the server you are working on by typing the following command:

    New-VM <VM-Name>

To create the VM on a different server, you specify that server's name after the VM name:

    New-VM <VM-Name> “<Server-Name>”

I talked about Cloud Computing in a previous blog post; you can read it for more information. In this one I will start explaining the first layer of Cloud Computing, which is the infrastructure. This is the physical layer; it mainly consists of servers, networks, facilities, cooling systems, electricity, and everything else necessary to build a datacenter. This layer mainly targets IT staff and operators who are looking for a host for their applications and data.

The infrastructure requires an operating system installed on it to take control of the whole infrastructure; there are a lot of OSes that can be installed, like Windows Server, Linux, Unix, AIX, etc. The purpose of the infrastructure may oblige you to install a certain kind of OS: for example, if the infrastructure is built to consume its processing power for research or the like, the choice will revolve around an HPC (High Performance Computing) OS. Some companies, like Microsoft, help you build your own private cloud to maximize the utilization of the hardware resources your company has; this is done with several products, not only an OS but also System Center. In my opinion the era of the private cloud won't endure for long; it is just for a certain time, until Cloud Computing gains more and more confidence, after which I think all companies, from large enterprises to small ones, will move to the public cloud.

The public cloud now offers you infrastructure as a service, cutting the IT infrastructure cost for rising companies and large enterprises alike. They can host their data on the cloud infrastructure and pay only for what they use in processing power, storage and other services.

So how does the cloud provider prevent you from accessing other customers' data while at the same time giving you more control to manage your own data on his infrastructure? The answer is quite simple: virtualization. I wrote a blog post about it before, and I am willing to go through it step by step, from the basic virtualization level to a more advanced one: how to build your VM.

When we go to buy any new laptop these days we all hear about cores: Intel Core 2 Duo, quad core, etc. Developers also know about cores and might have heard about threads; here I am not talking about threads in software development but about processor threads.

The explanation is easy: consider the number of cores as the number of physical processors that work in parallel, and the number of threads as the number of logical processors. The threads, or logical processors, are used to execute more instructions; we can consider that each thread executes a specific instruction, so if the processor has 2 threads, it can execute 2 instructions at the same time.

Nowadays a processor package may actually contain more than one physical core, and within each physical core there might be a number of threads that share some of that core's components, like the ALU and the execution engine. The physical cores themselves might share components like the bus interface that transfers data and instructions to and from memory.

V8 JavaScript Engine

Posted: June 14, 2012 in Uncategorized

In my last few blog posts I focused on Event-Driven Programming, JavaScript and Node.JS. Now let's see how this actually works. A JavaScript engine is an interpreter that executes JavaScript. These engines are most of the time used in web browsers, and each browser has its own: IE has the engine codenamed "Chakra", and Chrome has V8. And here comes the topic of this blog post: V8.

V8 is a very powerful engine developed by Google, written mainly in C++. The main reason I am talking about V8 is that Node JS is based on it, so I tried to understand how the parallelism of incoming requests is achieved using it. V8 is open source; you can work on it and learn more about it from this link. In the coming part I will try to show how the V8 engine works and how I think it can help the cloud solve the problem of handling an increasing number of connections.

V8 mainly exposes 3 concepts: Handle, Scope and Context. I won't take long explaining them, but for more information you can visit this webpage. I will cover all 3 without going deep. The first is the handle, which is responsible for pointing to objects. Handles are divided into 2 kinds: the local handle and the persistent handle. The local handle is created when there is a function call; the persistent handle is created and deleted only when specified, and it mainly deals with the DOM.

The scope is the container of the handles, according to this website.

The context is the environment in which the JavaScript is interpreted and executed. One of the important things it offers is that it allows multiple pieces of JavaScript code to run in a single instance of the V8 engine.

Having explained the main components of V8 so far, let us try to clarify how this works for Node JS, especially since it is based on V8. With JavaScript running on the server side there is no DOM to deal with, only incoming traffic for which the required functions are executed and results returned. So what I imagine is that with each incoming request, rather than creating and executing in a new context, some of the incoming traffic can share the same context, so the functions execute much faster.

Now, redefining what Cloud Computing is: I posted a previous blog post about it before, but the reason I am rewriting this is that I might get a better understanding of the concept or cover it from other points of view. Remember that at the beginning of the internet era, internet users faced some problems, one of them being the availability of the application or website they were looking for. Let's take a small example of why this happened. When X users try to access a web application, each connection consumes some temporary memory, say around 2 MB, so the server hosting the website must have more than X*2 MB of memory, or else it will go down and won't be able to handle the incoming connections. So the main solution is to increase and enhance your hardware; regardless of what enhancement is done on the software side, you will always need more hardware.
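The arithmetic in that example is simple enough to write down (the 2 MB per connection is the assumed figure from above):

```javascript
// With a fixed per-connection memory cost, a server's capacity is a division.
function maxConcurrentConnections(serverMemoryMB, perConnectionMB) {
  return Math.floor(serverMemoryMB / perConnectionMB);
}

// A server with 8 GB of RAM and ~2 MB per connection tops out around 4096 users.
console.log(maxConcurrentConnections(8192, 2)); // 4096
```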

Let's jump to another part: the business owners. Large enterprises and some medium companies have to build their own datacenters and IT infrastructure. Each time their business grows, the incoming traffic to their applications, whether from clients reaching applications hosted on their servers or from internal employees accessing internal systems, forces them to buy new hardware to extend the IT infrastructure. From this last point there are 3 main problems:

  • Availability
  • Scalability (the memory, storage and processing power might not be available when needed, due to the limited resources they have)
  • Cost

The only real solution has always been enhancing the hardware and buying new servers, regardless of software updates. Here comes the Cloud Computing role, providing, in my opinion, one main thing: the "illusion of infinite hardware resources". The datacenter will have thousands of servers, maybe millions, where the applications are hosted, which means the cloud can give you great flexibility in the storage and processing power available to your applications. As it is impossible for every company to have its own cloud, there are cloud providers, like Microsoft, Google, Amazon and several others, that deliver Cloud Computing to you as a service.

Let me give you an example explaining the meaning of "as a service". A factory might build its own electric generators for its high consumption of electricity. This costs the factory a lot of money: a capital cost to build them, and then regular operating costs just to keep the generators running. Or the factory can simply take electricity from the grid as a service, paying for the amount of electricity it uses and sharing the supply with others.

The same goes for the cloud: rather than paying millions of dollars just to build your IT infrastructure, you can take it as a service from a cloud provider, sharing the resources with a lot of other companies. This means you cut off your IT infrastructure capital cost and run with operating costs only, depending on the amount of storage used and processing power consumed. More blog posts will follow about the layers of cloud computing and how each layer benefits its users.

Node JS on Windows Azure

Posted: June 8, 2012 in Uncategorized

One of the latest things people have heard about is Node JS; when you develop on any available cloud platform you will certainly hear about it, and you may think of using it. So first of all, what is Node JS and what more can it do to help you build your application? We all know that JS stands for JavaScript, and the Node part means the server side. Yes, JavaScript on the server side. Node JS's main goal is to help you build highly scalable applications over the network... bla bla bla. How is that?

Let's imagine you get X requests on your web application; each of these requests consumes a certain amount of memory, so your hardware can only handle its total memory divided by the per-request amount, which is limited and costs a lot. (I got this example from this link on IBM developerWorks.) Node JS allows you to handle incoming requests with more parallelism, using JavaScript's event-driven programming model. In this post we will talk about the same thing, but on the server side.

Node is a server-side JS interpreter; it changes how the server processes and works with incoming requests. Node.JS is based on the V8 JavaScript engine, an ultra-fast engine; you can download it, read its documentation and embed it into any of your applications from this link.

Now let's go into the example. First of all, with the new features added to Windows Azure in June 2012, we need to sign in to the portal, create a new website and set its URL and its Git repository; you can download and install the Node.JS tools and Git from these 2 links. Let's start by opening Windows Azure PowerShell, and don't forget to run it as administrator. The coming few instructions will help you build your web application; just write the given commands in your PowerShell command line.

After creating the directory, you can now create the Windows Azure application. But first, don't forget to change to the directory you created.


If you open the directory you made, where you created the Azure service, you will find the following:

So let’s create the new WebRole for our application.

If you don't name the WebRole, it will be named WebRole1. You can also see the folder created and the files added just by browsing to it; you should see the following in the folder named mywebrole.

You can open and modify server.js by simply entering the following command; you will see server.js in Notepad, as the following picture shows.


Let's run our application and see the result.

The result should be like the following picture:

To modify the application or add new pages, you can add new JavaScript files or modify the server.js already created.

Now let's upload to our Azure accounts, supposing you have downloaded Git, created a new website and set the Git publishing credentials. After creating a website on the Windows Azure Portal, you will find a link called Set Git Publishing on the right side of the window; click on this link:

Now, depending on what you chose during the installation (I chose Git Bash), you may have given it permission so you can access it through PowerShell. Either way, you will use the same commands.

Make sure you are in the right directory and start initializing Git with the following commands.

Now continue and enter the following command to connect and deploy the application to the portal.

You can get this URL from your portal from:

The last command line is
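Since the screenshots with the exact commands are not reproduced here, the usual Git deployment sequence looked roughly like this (the remote URL placeholder must be replaced with the Git URL shown in your portal):

```
git init
git add .
git commit -m "initial deployment"
git remote add azure <git-url-from-your-portal>
git push azure master
```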

Now if you simply go to your application's link, you will find the hello world we created. :)