Cloud Computing – It’s not ‘WHAT’ but ‘HOW’ we do things…



The concept of the cloud is one of those things that certainly gets a lot of hype, but no matter how one looks at it, the fact remains that it is being adopted in ever greater numbers and is becoming a more important topic for people to understand. With that growth, its security aspect is also playing an important role: what does the cloud actually mean, and how should people deal with it? These questions have been discussed at length, yet the space remains confusing for some and carries a lot of uncertainty.

The industry spent months wrestling over exactly how to describe this 'thing' in a distinct way, especially to people who had not been thinking about it. The definitions that exist are many and varied, yet one way to look at it is this: "Cloud Computing is not necessarily about 'what', it's more about 'how we do things'". Little new technology is required for cloud computing; most of it is existing technology simply 're-mixed', and the real change lies in how consumers acquire that technology, how it is provisioned internally, how it is used, and whether or not the consumer actually owns the computing resources they are accessing. A great deal of what the cloud is was signaled and opened up, from a market perspective, by 'virtualization' and the concept of abstracting applications from the physical infrastructure, which opened the eyes of enterprises, and a fair number of vendors, to what the cloud can do. 'VMware' clearly had an impact on the market, and on another front, internet services coming from the consumer world also signaled the new model of computing; Google in particular has demonstrated what can be done with a scalable, highly efficient infrastructure serving applications over the wire.
So, we could sum it up this way: there is not much new work involved. There are some new technologies, especially around the provisioning, scaling and metering aspects of computing, but the reality is that we are changing a lot about 'how people use computing', and that is a great way to think about cloud computing.

Another way to view the cloud is to think of it as a spectrum. Many people ask how to define 'Infrastructure as a Service' (IaaS), 'Platform as a Service' (PaaS) and 'Software as a Service' (SaaS), and today there are just too many 'as a service' acronyms, one of them being 'XaaS', meaning 'Everything as a Service'. The best way to think about these things, or to identify which bucket an offering fits into, is to look at which layer of the computing stack is being abstracted. The infrastructure players (i.e. Infrastructure as a Service) focus on core computing infrastructure. Looking at Amazon's AWS, with EC2 and S3, they take a server, and its disk drives, and turn them into abstracted entities that you can access over the wire; they are going squarely after the compute layer, and companies like GoGrid are doing the same thing within the Microsoft framework. Many other firms are entering this category, such as Linode, Rackspace and others. When we move up to 'Platform as a Service', one is really making a decision about the application framework, and people need to understand that when they approach PaaS, they are making a very strategic decision about how they are going to architect whatever application they are going to run: if they choose 'Azure', 'AppEngine' or 'Engine Yard', they are effectively getting married to that platform. In that sense, 'Platform as a Service' is one of the most interesting areas of the cloud computing market. The last area, 'Software as a Service', is the layer of business logic on top of the application framework, and it is what really drives business value.
This is where the translation of computing into value for the business occurs, which is why so much action is seen in this part of the marketplace. Moreover, looking at the startups as well as the established players involved, the growth of this marketplace is quite significant.

# This published post can also be viewed here.
[+] This post is a part of my ongoing research paper on ‘Cloud Computing: Privacy & Security’.


Social Networking: Privacy and Security…



According to reports in daily journals and magazines, a number of US privacy and consumer protection groups filed a complaint with the US Federal Trade Commission (FTC) accusing Facebook of "unfair and deceptive" practices. Moreover, they called on the FTC to investigate Facebook's privacy practices further and to force it to take steps to guard better against security breaches.

On top of this, I get multiple queries every day about 'accounts getting hacked', 'identity theft' and 'hoaxes aka spam spreading' through social networking portals. The one thing I would like to say, as a security consultant as well as a well-versed Web 2.0/3.0 user, is: *you can never get hacked by anyone until you let it happen*. The techniques perpetrators use to get into your account are 'your' own captivating content published online about yourself, plus a bit of 'social engineering tactics', which informally means: "approaching a user under a fake identity to gain the user's 'trust', and then extracting valued information through continuous interaction".

In response to the complaints by various 'consumer protection groups', the social networking company has added several new security tools to help prevent hacking, along with extra privacy options. Yet no matter what the FTC finds or what features social networking firms add, a far better approach to 'user security and privacy' is to ensure that users are aware of social networking risks and accountable for the types of information they willingly share.

Some general best practices, combined with a dose of common sense, that users should be aware of include:

  • Keep your 'personal information' to yourself: never post your full name, social security number, address, phone number, or other numbers tied to your accounts or other personal assets. Be cautious about posting information that could be used to identify you offline (school, college, workplace, etc.).
  • Post only information that you are comfortable with others seeing and knowing about you. Keep in mind that many people can see your page, not just your accepted friends.
  • Remember that once you post information online, you cannot take it back. Even if you delete it from a site, older versions persist on other machines and in caches.
  • Read the privacy guide of the social networking portals you use. At the bottom of every page there is a link for "Privacy". That page describes the latest privacy functions and policies set up by the firm and helps you verify your privacy settings.
  • Choose your ‘Friends’ carefully. Once you have accepted someone as your friend, they will have access to any information about you (including photographs) ‘that you have marked as viewable by your friends’.

Organizations, corporates and institutions should also find better ways to provide ongoing safety awareness, helping people understand the escalating risks and threats lurking online when they willingly share too much credential information that may attract intruders.

Individual users need to be more accountable for securing their sensitive and personal credentials. Is it a ‘networking portal’s’ responsibility, if users decide to post their ‘valuable credentials’ or share their credit card number online?



What does a processor do when it doesn’t spend time on ‘USER ACTIVITIES’…



The most typical case one observes is the processor spending its time on user tasks. If not, then what might it be doing?
Let’s have a glimpse of all the ‘plays’ running around in kernel code when the processor is not busy with user tasks…

  • System call – When a *user* task or a specific kernel thread requests some service from the heart of the OS, aka the kernel, it traps into privileged mode and kernel code performs the requested service. System call handlers usually return an integer; by convention, if the return value is a small negative integer (-1 .. -515), the system call is returning an error (its absolute value should be one of the constants from ‘errno.h’). Any other value signifies successful completion. If a system call returns an error, the application will usually see a positive error code in errno, while the call itself returns -1 to the application.
  • Exception handling – When some instruction in user code or in the kernel raises an exception (which one can observe fairly often), the kernel has to handle that as well. Unlike a system call, this action is not requested by the user directly. The most typical member of this category is the ‘page table’ miss, i.e. the page fault. Most of these exception handlers reside in platform-dependent files; some of them then call generic handling code (like handle_mm_fault in our case).
  • Kernel thread – There are several special processes which execute only in kernel space (e.g. init, one of the roots of the process tree) and use the standard trap method of invoking system calls when necessary. When booting, the kernel spawns several kernel threads; some of them then execute a system call by which they lose the kernel-thread state and become normal processes, which is the case with, e.g., init.
  • Interrupt – When some hardware requests an action, it sends an interrupt to the kernel, which in turn calls an interrupt handler, if someone has registered one. An interrupt handler should be fast, so that it does not lock the system for too long; handlers are usually executed with interrupts disabled on the local CPU. Interrupts normally execute in the context of whichever task was current on the CPU servicing the IRQ at the time the interrupt arrived, so there is no interrupt thread or anything of that sort. Each thread has kernel space mapped as part of its address space, so as long as you do not access user space (you should never try that, *unless you are that much of a nerd*), it does not matter in which task context you are executing. Every interrupt has an assigned interrupt number which you use, for example, in calls to enable_irq() or disable_irq() (to disable that particular interrupt). The exact interpretation of this number is platform dependent, and a device driver writer should not assume anything about its value; treat it as a 32-bit integer of unknown meaning. Never use this value to index into static arrays: it might work on one platform but break on another. Some architectures have distinct interrupt numbers per interrupt level, while others encode board and slot numbers into it (which is handy for recognition, *though not in an enterprise environment*). Consequently, on some platforms disable_irq() can disable just one interrupt level of a certain card on a certain bus, while on other platforms, where IRQ handling is not that advanced yet, it simply disables a certain interrupt level on all CPUs.
  • Bottom half handler – So that you do not block interrupts on the local CPU for too long, you can defer part of your interrupt handling to a bottom half handler, i.e. queue some functions for later processing. Normally, in your short, fast interrupt handler you call mark_bh() if you need longer processing. Then, when your interrupt handler is done, the system checks for pending bottom half handlers; if it finds some, it enables interrupts and executes them. To use a bottom half handler from within your interrupt handler, you have to allocate a new bottom half handler type, register your handler with that type in your initialization code (init_bh()), and then actually trigger it from your interrupt handler (mark_bh()). This is probably not a good solution, though, as there are only 32 bottom half handlers available for registration, half of them already in use by other parts of the kernel (so make a wise decision, to avoid regret afterwards). Another, much nicer way to run your bottom half handlers without wasting the precious global bottom half types comes in the form of task queues, or ‘tasklets’. These have nothing to do with tasks as execution entities; here a ‘task’ means a function with an arbitrary argument which you can schedule for later execution, in other words a sub-category of interrupt handling.
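The error-return convention described under ‘System call’ above is visible even from user space: a failed call surfaces as -1 at the C level, and high-level runtimes translate that into an error object carrying the positive errno code. A minimal Python sketch (the path is made up, and ENOENT is the expected code for a missing path):

```python
import errno
import os

# Try to open a file that does not exist. The underlying open(2)
# system call returns -1 and sets errno to ENOENT ("No such file
# or directory"); Python surfaces this as an OSError carrying the
# positive errno value.
try:
    os.open("/no/such/file/anywhere", os.O_RDONLY)
except OSError as e:
    print(e.errno == errno.ENOENT)   # True
    print(os.strerror(e.errno))      # No such file or directory
```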


Under illumination variations, exploiting 3D image for ‘Face Authentication’ in Biometrics…


, ,


Automatic recognition of human faces is extremely useful in a wide area of applications, such as face identification for security and access control, surveillance of public places, mug shot matching and other commercial and law enforcement applications.

The majority of face recognition techniques employ 2D grayscale or color images. Only a few techniques have been proposed that are based on range or depth images. This is mainly due to the high cost of available 3D digitizers that makes their use prohibitive in real-world applications. Furthermore, these devices often do not operate in real time or produce inaccurate depth information.

A common approach towards 3D face recognition is based on the extraction of 3D facial features by means of differential geometry techniques. Facial features invariant to rigid transformations of the face may be detected using surface curvature measures. The combination of 3D and gray-scale images has also been addressed, but there 3D information is only used to aid feature detection and to compensate for the pose of the face.

The most important argument against techniques using a feature-based approach is that they rely on accurate 3D maps of faces, usually extracted by expensive off-line 3D scanners. Low-cost scanners, however, produce very noisy 3D data. The applicability of feature-based approaches to such data is questionable, especially if computation of curvature information is involved. Also, the computational cost associated with the extraction of the features (e.g. curvatures) is significantly high, which hinders the application of such techniques in real-world security systems. The recognition rates claimed by the above 3D techniques were estimated using databases of limited size and without significant variations of the faces. Only recently has an experiment been conducted with a database of significant size containing both grayscale and range images, producing comparative face identification results using eigenfaces (a set of eigenvectors used in the computer-vision problem of human face recognition) for 2D, 3D and their combination, and for varying image quality. That test, however, considered only frontal images with neutral expression, captured under constant illumination conditions.
Beyond combining 2D and 3D information under background clutter, occlusion, face-pose variation and harsh illumination conditions, one can exploit depth information together with prior knowledge of face geometry and symmetry. Furthermore, unlike techniques that rely on an extensive training set to achieve high recognition rates, this approach requires only a few images per person.

Acquisition of 3D and Color Images
A 3D and color camera capable of real-time acquisition of 3D images and associated 2D color images is employed. The 3D-data acquisition system, which uses an off-the-shelf CCTV color camera and a standard slide projector, is based on an improved and extended version of the well-known Coded Light Approach (CLA) to 3D-data acquisition. The basic principle behind this device is the projection of a color-encoded light pattern onto the scene and the measurement of its deformation on object surfaces. By rapidly alternating the color-coded light pattern with a white-light pattern, both color and depth images are acquired. The average depth accuracy achieved, for objects located about 1 meter from the camera, is better than 1mm over an effective working space of 60cm × 50cm × 50cm, while the resolution of the depth images is close to that of the color camera.
The acquired range images contain artifacts and missing points, mainly over areas that cannot be reached by the projected light and/or over highly refractive (e.g. eye-glasses) or low reflective surfaces (e.g. hair, beard). Some examples of images acquired using the 3D camera can be seen in Figs. 1, 2, 3 and 4. Darker pixels in the depth map correspond to points closer to the camera and black pixels correspond to undetermined depth values.

Face Localization
A highly robust face localization procedure is proposed based on depth and color information. By exploiting depth information, the human body may easily be separated from the background, while by using a priori knowledge of its geometric structure, efficient segmentation of the head from the body (neck and shoulders) is achieved. The position of the face is further refined using brightness information and exploiting face symmetry.

Separation of the body from the background is achieved by computing the histogram of depth values and estimating the threshold separating its two distinct modes. Segmentation of the head from the body relies on statistical modelling of the head and torso points in 3D space.
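The background-separation step amounts to finding the valley between the two modes of the depth histogram. As a rough illustration (not necessarily the exact estimator used here), Otsu's method performs exactly this kind of two-mode threshold selection; a self-contained sketch on synthetic depth values:

```python
def otsu_threshold(values, bins=64):
    """Estimate the threshold separating the two modes of a 1-D sample
    by maximizing the between-class variance (Otsu's method)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best, best_bin = -1.0, 0
    w_b, sum_b = 0, 0.0
    for t in range(bins):
        w_b += hist[t]                  # weight of the "background" class
        if w_b == 0:
            continue
        w_f = total - w_b               # weight of the "foreground" class
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b               # class means (in bin units)
        m_f = (total_sum - sum_b) / w_f
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best:
            best, best_bin = var_between, t
    return lo + (best_bin + 1) * width

# Synthetic depth values: a cluster near 1 m (the body) and one
# near 3 m (the background wall).
depths = ([1.0 + 0.01 * i for i in range(20)] +
          [3.0 + 0.01 * i for i in range(20)])
t = otsu_threshold(depths)
print(1.19 < t < 3.0)   # True: the threshold falls between the modes
```

Thresholding the depth map at `t` then labels every pixel as body or background.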

The probability distribution of a 3D point x is modelled as a mixture of two components:

P(x) = P(head)P(x|head) + P(torso)P(x|torso)

In this case, the priors P(head) and P(torso), as well as the component densities, may be obtained by exploiting prior knowledge of the body geometry.
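The two-component mixture above can be written out directly. A toy sketch, assuming (purely for illustration) isotropic Gaussian components whose means, spreads and priors stand in for the body-geometry knowledge mentioned in the text:

```python
import math

def gauss3d(x, mean, sigma):
    """Isotropic 3-D Gaussian density."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, mean))
    norm = (2 * math.pi * sigma ** 2) ** 1.5
    return math.exp(-d2 / (2 * sigma ** 2)) / norm

# Illustrative parameters (not from the text): the head is a small
# blob above a larger torso blob; the priors reflect relative size.
p_head, p_torso = 0.3, 0.7
head_mean, head_sigma = (0.0, 1.7, 1.0), 0.10
torso_mean, torso_sigma = (0.0, 1.3, 1.0), 0.25

def p(x):
    # P(x) = P(head)P(x|head) + P(torso)P(x|torso)
    return (p_head * gauss3d(x, head_mean, head_sigma) +
            p_torso * gauss3d(x, torso_mean, torso_sigma))

def classify(x):
    """Assign a 3-D point to the more probable component (MAP rule)."""
    h = p_head * gauss3d(x, head_mean, head_sigma)
    t = p_torso * gauss3d(x, torso_mean, torso_sigma)
    return "head" if h > t else "torso"

print(classify((0.0, 1.72, 1.0)))   # head   (near the head mean)
print(classify((0.1, 1.25, 1.0)))   # torso  (in the torso region)
```

Running the MAP rule over all 3-D points is what segments the head from the torso.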

The above clustering procedure yields inaccurate results when biased by erroneous depth estimates, e.g. over occluded parts of the face. Therefore, a second step is required that refines the localization using brightness information. The aim of this step is to locate the point that lies in the middle of the line segment defined by the centers of the eyes. An image window containing the face is then centered around this point, achieving approximate alignment of facial features across all images, which is very important for face classification. The proposed technique exploits the highly symmetric structure of the face: first the horizontally oriented axis of bilateral symmetry between the eyes is sought, then the vertically oriented axis of bilateral symmetry of the face is estimated. The intersection of these two axes defines the point of interest.
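The symmetry-axis search can be sketched very simply: slide a candidate mirror axis across the image and keep the position where the two halves differ least. This toy version (not the paper's actual estimator) finds the vertical axis on a tiny synthetic brightness image:

```python
def symmetry_axis(img):
    """Return the column about which the image is most mirror-symmetric,
    scored by mean absolute difference of mirrored pixel pairs."""
    h, w = len(img), len(img[0])
    best_col, best_cost = 0, float("inf")
    for c in range(1, w - 1):
        half = min(c, w - 1 - c)        # widest mirrored span at this axis
        cost = sum(abs(img[r][c - d] - img[r][c + d])
                   for r in range(h) for d in range(1, half + 1))
        cost /= half                    # normalize so narrow spans aren't favored
        if cost < best_cost:
            best_cost, best_col = cost, c
    return best_col

# A synthetic 5x7 image, mirror-symmetric about column 3.
img = [[0, 1, 2, 9, 2, 1, 0]] * 5
print(symmetry_axis(img))   # 3
```

The horizontal axis between the eyes can be found the same way, operating on rows instead of columns.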

Simulating Illumination
Another source of variation in facial appearance is the illumination of the face. The majority of the techniques proposed to cope with this problem exploit the low dimensionality of the face space under varying illumination conditions. They either use several images of the same person recorded under varying illumination conditions, or rely on the availability of 3D face models and different maps to generate novel views. The main shortcoming of this approach is that, in practice, it requires large example sets to achieve good reconstructions.
Our approach, on the other hand, builds an illumination-varying subspace by constructing artificially illuminated color images from an original image. This normally requires surface-gradient information, which in our case may easily be computed from the depth data. Since it is impossible to simulate all types of illumination conditions, we try to simulate those that have the greatest effect on face recognition performance. Heterogeneous shading of the face, caused by directional light coming from one side of the face, was experimentally shown to be most commonly responsible for misclassification. Given the surface normal vector N computed at each point of the surface, the RGB color vector Ia of a pixel in the artificial view is given by:
Ia = Ic (ka + kd L · N)
where Ic is the corresponding color value in the original view, ka and kd weight the effects of ambient light and diffuse reflectance respectively, and L is the direction of the artificial light source. Fig. 4 shows an example of artificially illuminated views.
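The relighting equation above is cheap to apply once the normals are known from the depth map. A minimal per-pixel sketch; the parameter values are illustrative, and the clamping of L · N to non-negative values is an added assumption for back-facing surfaces:

```python
def relight(color, normal, light, ka=0.2, kd=0.8):
    """Apply Ia = Ic * (ka + kd * (L . N)) per RGB channel.

    color  : (r, g, b) of the original pixel, 0..255
    normal : unit surface normal N at that pixel (from the depth map)
    light  : unit direction L of the artificial light source
    """
    # Clamp the dot product so surfaces facing away get ambient light only.
    ldotn = max(0.0, sum(l * n for l, n in zip(light, normal)))
    shade = ka + kd * ldotn
    return tuple(min(255, int(c * shade)) for c in color)

# A pixel whose normal faces the light keeps its brightness...
print(relight((200, 150, 100), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # (200, 150, 100)
# ...while side lighting from the right darkens a left-facing pixel
# down to the ambient term alone.
print(relight((200, 150, 100), (-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # (40, 30, 20)
```

Sweeping the light direction L over a range of angles is what generates the illumination-varying subspace.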

Thus, an attacker can edit and adjust the lighting and angle of a ‘phony’ photo to ensure the system will accept it. Since a hacker does not know exactly what the face learned by the system looks like, he has to create a large number of images… let us call this method of attack ‘fake-face brute force’. It is easy to do with the wide range of image-editing programs available at the moment. And let’s face the facts: these systems are not ‘that’ secure.

So, at hand we have biometric systems using ‘not so good’ image capture devices (like those built into Lenovo, Toshiba and Asus machines) + { a local face localization method + a basic illumination technique } = something that can simply be brute-forced using embedded programming techniques, basic purpose-built hardware, and a wide range of image-editing programs.


A glimpse of how ‘PHISHERS’ take over a Corporate Network, starting with ‘Social Networks’…



“Hey Alice, look at the pics I took of us last weekend at the picnic. Bob”. That Facebook message, sent last fall between co-workers at a large U.S. financial firm, rang true enough. Alice had, in fact, attended a picnic with Bob, who mentioned the outing on his ‘Facebook’ profile page.

So Alice clicked on the accompanying Web link, expecting to see Bob’s photos. But the message had come from some ‘bad guys’ who had hijacked Bob’s Facebook account. And the link carried an infection. With a click of her mouse, Alice let the attackers usurp control of her Facebook account and company laptop. Later, they used Alice’s company logon to slip deep inside the financial firm’s network, where they roamed for weeks. They had managed to grab control of two servers, and were probing deeper, when they were detected.

Now let us look closely at how this layered approach by the so-called ‘not-so-good guys’ tricked its way into the corporate network.

The attack on the picnicking co-workers at the financial firm illustrates how targeted attacks work. Last fall, attackers somehow got access to Bob’s Facebook account, logged into it, grabbed his contact list of 50 to 60 friends and began manually reviewing messages and postings on his profile page. Noting discussions about a recent picnic, the attackers next sent individual messages, purporting to carry a link to picnic photos, to about a dozen of Bob’s closest Facebook friends, including Alice. The link in each message led to a malicious executable file, a small computer program.

Upon clicking on the bad file, Alice unknowingly downloaded a rudimentary keystroke logger, a program designed to save everything she typed at her keyboard and, once an hour, send a text file of her keystrokes to a free Gmail account controlled by the attacker. The keystroke logger was of a type that is widely available for free on the Internet.

The attackers reviewed the hourly keystroke reports from Alice’s laptop and took note when she logged into a virtual private network account to access her company’s network. With her username and password, the attackers logged on to the financial firm’s network and roamed around it for two weeks.

First they ran a program, called a port scan, to map out key network connection points. Next they systematically scanned all of the company’s computer servers looking for any that were not current on Windows security patches. Companies often leave servers unpatched, relying on perimeter firewalls to keep intruders at bay. The attackers eventually found a vulnerable server, and breached it, gaining a foothold to go deeper.
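The ‘port scan’ the intruders ran first is, at its core, nothing more than attempting TCP connections and noting which ones succeed. A minimal connect-scan sketch, for use against your own machines only; the demo opens its own listener so the result is known in advance:

```python
import socket

def connect_scan(host, ports, timeout=0.3):
    """Return the subset of `ports` accepting TCP connections on host."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Demo against a listener we open ourselves.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # the OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]
print(connect_scan("127.0.0.1", [open_port]) == [open_port])  # True
listener.close()
```

Real scanners add tricks (SYN scans, timing randomization) precisely to avoid being as “noisy” as the scans that gave these attackers away.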

A short time later, the attackers were discovered and cut off. One of Bob’s Facebook friends mentioned to Bob that the picnic photos he had sent had failed to render. That raised suspicions. A technician took a closer look at daily logs of data traffic on the company’s network and spotted the vulnerability scans. The attack had lasted some two weeks. If the attackers’ vulnerability scans had not been so “noisy”, they might not have been noticed, and the company could have suffered severe losses in terms of costly data breaches and corrupted databases, as well as system repairs.

What’s interesting in this story is that the initial attack on the employees’ Facebook friends is pretty hard to defend against, since nothing seemed out of the ordinary. There really was a corporate picnic!

So, before clicking on a link provided by your friends, do engage your brain and take precautionary measures, like cross-checking the message with its sender: you may not be the only one affected, yet you will be ‘leading the hierarchy’!