Prof. Sugata Sanyal

Details of work done in the areas of Networks and Security, and Parallel Processing and Algorithms:

  1. A Very Simple Approach for 3-D to 2-D Mapping: We often need to plot 3-D functions, e.g., in many scientific experiments. Plotting a 3-D function on a 2-D screen requires some kind of mapping. Though 3-D rendering libraries such as OpenGL and DirectX have made this job very simple, they come with many complex pre-operations that are often not needed, and integrating them with an arbitrary system is often a tough trial. This work presents a very simple method of mapping from 3-D to 2-D that is free from any complex pre-operation and works with any graphics system offering primitive 2-D graphics functions. We also discuss the inverse transform and how to perform basic computer graphics transformations using our coordinate mapping system. [2007]
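    At its core, such a mapping can be a single perspective divide. The sketch below is purely illustrative; the viewing distance and screen-center parameters are assumptions, not the paper's exact transform:

```python
def project(x, y, z, d=256.0, cx=320, cy=240):
    """Map a 3-D point to 2-D screen coordinates with a perspective divide.

    d is an assumed viewing distance; (cx, cy) an assumed screen center.
    Points farther away (larger z) are scaled toward the center.
    """
    s = d / (d + z)              # scale factor shrinks with depth
    return (cx + x * s, cy - y * s)  # screen y grows downward

# A point at the origin lands at the screen center:
print(project(0, 0, 0))      # (320.0, 240.0)
# The same x offset shrinks as the point recedes:
print(project(100, 0, 256))  # (370.0, 240.0)
```

    Any system with a primitive 2-D line or pixel function can then draw the projected points directly.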
  2. Impact of Node Mobility on MANET Routing Protocols Models: A Mobile Ad-Hoc Network (MANET) is a self-configuring network of mobile nodes connected by wireless links to form an arbitrary topology without the use of existing infrastructure. In this paper, we have studied the effects of various mobility models on the performance of two routing protocols, Dynamic Source Routing (DSR, a reactive protocol) and Destination-Sequenced Distance-Vector (DSDV, a proactive protocol). For the experiments, we have considered four mobility scenarios: the Random Waypoint, Group Mobility, Freeway and Manhattan models, selected to represent likely practical applications. Performance comparison has also been conducted across varying node densities and numbers of hops. The experimental results illustrate that routing-protocol performance varies across different mobility models, node densities and lengths of data paths. [2007]
  3. ACRR: Ad-hoc On-Demand Distance Vector Routing with Controlled Route Requests: Reactive routing protocols used in Mobile Ad-hoc Networks (MANETs), such as Ad-hoc On-Demand Distance Vector Routing (AODV) and Dynamic Source Routing (DSR), work by flooding the network with control packets. There is generally a limit on the number of these packets that can be generated or forwarded, but a malicious node can disregard this limit and flood the network with fake control packets. These packets hog the limited bandwidth and processing power of genuine nodes while being forwarded. As a result, genuine route requests suffer: many routes either do not get a chance to materialize or end up longer than they otherwise would. In this paper we propose a non-cryptographic solution to this problem and demonstrate its efficiency by means of simulation. [2007]
  4. Mobile Ad Hoc Network Security Vulnerabilities: The routing protocols used in the current generation of mobile ad hoc networks are based on the principle that all nodes will cooperate. However, any node may misbehave, i.e., deviate from the assumptions of the regular routing and forwarding protocol. Misbehavior may arise unintentionally, when a node is faulty, or intentionally, when a node wants to save its resources: to conserve battery, bandwidth, and processing power, a node may refuse to forward packets for others. Without any counter-policy, the effects of misbehavior have been shown to dramatically decrease network performance. Depending on the proportion of misbehaving nodes and their strategies, network throughput can decrease, and there can be packet losses, denial of service or network partitioning. These detrimental effects of misbehavior can endanger the entire network. We deliberate on all these issues in our work. [2007]
  5. An LSB Data Hiding Technique Using Natural Numbers: In this work, a novel data hiding technique is proposed as an improvement over earlier work in this field. First we mathematically model and generalize our approach. Then we propose our novel technique, based on the decomposition of a number (pixel value) as a sum of natural numbers. This particular representation generates an altogether different set of (virtual) bit-planes, suitable for embedding purposes. They not only allow one to embed the secret message in higher bit-planes but do so without much distortion, in a reliable and secure manner, guaranteeing efficient retrieval of the secret message. Analysis indicates that the quality of the stego-image produced by the natural-number decomposition technique improves drastically over that of the prime and Fibonacci decomposition techniques. Experimental results show that the stego-image is visually indistinguishable from the cover image. [2007]
  6. An LSB Data Hiding Technique Using Prime Numbers: In this work, a novel data hiding technique is proposed as an improvement over earlier work in this field. First we mathematically model and generalize our approach. Then we propose our novel technique, based on the decomposition of a number (pixel value) as a sum of prime numbers. This particular representation generates an altogether different set of (virtual) bit-planes, suitable for embedding purposes. They not only allow one to embed the secret message in higher bit-planes but do so without much distortion, with much better stego-image quality, and in a reliable and secure manner, guaranteeing efficient retrieval of the secret message. Analysis indicates that the quality of the stego-image produced by the prime decomposition method improves drastically over that of the earlier method. Experimental results show that the stego-image is visually indistinguishable from the original cover image. [2007]
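    Both decomposition techniques generalize the classical LSB baseline, in which one message bit replaces the least significant bit of each pixel. A minimal sketch of that baseline (not the virtual-bit-plane scheme itself) clarifies what is being improved upon:

```python
def embed_lsb(pixels, bits):
    """Classical LSB embedding: place one message bit in each pixel's
    least significant bit. Distorts each pixel value by at most 1."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then set it to b
    return out

def extract_lsb(pixels, n):
    """Recover the first n embedded bits from the stego pixels."""
    return [p & 1 for p in pixels[:n]]

stego = embed_lsb([10, 11, 200], [1, 0])
print(stego)                  # [11, 10, 200]
print(extract_lsb(stego, 2))  # [1, 0]
```

    The decomposition-based techniques replace the fixed powers-of-two weights implicit above with natural-number or prime weights, yielding different virtual bit-planes with lower distortion at higher planes.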
  7. An Overview of the Evolutionary Trends in Molecular Computing using DNA: This work addresses the use of the biochemical molecule DNA to solve computational problems. DNA computers use strands of DNA to perform computing operations. Such a computer consists of two types of strands - instruction strands and input data strands. The instruction strands splice the input data strands together to generate the desired output data strand. Since the instruction strands are generic nucleotide codes for the various operations, DNA computers meet the accepted standard for being classed as true computers according to the Turing machine concept. A strong point in favor of DNA computing is the potential storage capacity of DNA, which far exceeds that of the most advanced silicon storage devices. A major challenge that DNA circuit engineers face is the difficulty of predicting circuit performance at the design stage, with the consequence that actual construction requires significant experimental effort, even for very simple circuits. [2007]
  8. Whole Genome Comparison on a Network of Workstations: Whole genome comparison consists of comparing or aligning genome sequences with the goal of finding similarities between them. Previously we showed how the SIMD extensions of Intel processors can be used to efficiently implement the genome-comparing Smith-Waterman algorithm. Here we present a distributed version of that algorithm. We show that even on somewhat outdated hardware we can achieve speeds upwards of 8000 MCUPS, making this one of the fastest implementations of the Smith-Waterman algorithm. [2007]
  9. Applying SIMD Approach to Whole Genome Comparison on Commodity Hardware: Whole genome comparison consists of comparing or aligning two genome sequences in the hope that analogous functional or physical characteristics may be observed. In this work, we present an efficient version of the Smith-Waterman algorithm that uses sub-word parallelism to speed up sequence-to-sequence comparison, utilizing the Streaming SIMD Extensions (SSE) on Intel Pentium processors. We compare two approaches, one requiring explicit data-dependency handling and the other built to handle dependencies automatically; we achieve a speedup of 10-30 and establish the optimum conditions for each approach. [2007]
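    For reference, the scalar recurrence that the SSE versions accelerate is the standard Smith-Waterman local-alignment score. A compact sketch (scoring parameters are illustrative defaults, not the papers' exact values):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Scalar Smith-Waterman: best local-alignment score of sequences
    a and b. Each cell takes the max of 0, a diagonal match/mismatch
    step, and two gap steps; the answer is the max over all cells."""
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            rows[i][j] = max(0,
                             rows[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                             rows[i - 1][j] + gap,    # gap in b
                             rows[i][j - 1] + gap)    # gap in a
            best = max(best, rows[i][j])
    return best

print(smith_waterman("ACGT", "ACGT"))  # 8 (four matches at +2 each)
```

    The SIMD versions evaluate many such cells per instruction; the two approaches compared differ in how the diagonal data dependency between cells is handled.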
  10. A Multi-factor Security Protocol for Wireless Payment - Secure Web Authentication Using Mobile Devices: Previous web-access authentication systems often use either the web or the mobile channel individually to confirm the claimed identity of the remote user. This paper proposes a new multi-factor authentication protocol that is both secure and highly usable. It uses a novel approach based on a Transaction Identification Code and SMS to add an extra security level to the traditional login/password system. The system provides a highly secure environment that is simple to use and deploy and that does not require any change in the infrastructure or protocols of wireless networks. The protocol for wireless payment is extended to provide two-way authentication. [2007]
  11. SpamWall: Heuristic Filter for Web-Spam: Web spam, i.e., web pages containing useless content, has grown tremendously in recent years; it interferes with the algorithms that search engines adopt for indexing and ranking, and should be filtered out. SpamWall analyses the content and structure of web pages based on pre-defined heuristics and classifies each page as spam or not. Search engines can use SpamWall to filter out pages that should not be indexed. Our implementation of the SpamWall system performed very well: having trained SpamWall sufficiently, we asked it to crawl the web and classify the pages it discovered, and it classified about 92% of the pages correctly. [2006]
  12. A Scheme to Control Flooding of Fake Route Requests in Ad-hoc Networks: The vulnerabilities of the routing protocols in mobile ad hoc networks have been identified in earlier work. In this work, we address the specific problem of uncontrolled flooding of route requests in the network by a malicious node. Route-request flooding has many implications: 1) wastage of the transmission power of forwarding nodes, 2) high occupancy of limited buffer space by malicious routing packets, and 3) degraded service in the network, as nodes are unable to service non-malicious requests. We propose a non-cryptographic approach to solve this problem; low processing overhead and simple metrics are the key contributions of this work. Nodes which try to introduce fake requests into the network are blocked and network health is maintained; thus, the impact of malicious nodes in the network is effectively controlled. The effectiveness of the proposed scheme is verified using rigorous NS-2 simulations, in which we show that route-forming efficiency is severely affected in the presence of malicious nodes. We also show that our proposed modification, AODV with Controlled Route Requests (ACRR), significantly improves the number of routes formed under similar circumstances. To establish the efficiency of our algorithm statistically, we repeated the simulations multiple times using different node placements, node mobilities and traffic patterns. [2006]
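    The blocking idea can be sketched as a per-neighbor counter with a threshold. The sketch below is an assumption-laden toy in the spirit of the scheme (the threshold value, interval handling and permanent blacklisting are illustrative choices, not the paper's exact metrics):

```python
from collections import defaultdict

class RreqFilter:
    """Toy per-neighbor RREQ rate limiter: neighbors exceeding the
    per-interval limit are blacklisted and their requests dropped."""

    def __init__(self, limit=10):
        self.limit = limit
        self.counts = defaultdict(int)   # RREQs seen this interval
        self.blacklist = set()           # neighbors judged malicious

    def accept(self, neighbor):
        """Return True if this neighbor's RREQ should be forwarded."""
        if neighbor in self.blacklist:
            return False
        self.counts[neighbor] += 1
        if self.counts[neighbor] > self.limit:
            self.blacklist.add(neighbor)
            return False
        return True

    def new_interval(self):
        """Reset counters each timing interval; blacklist persists."""
        self.counts.clear()
```

    A flooder thus consumes only its first few packets' worth of a genuine node's bandwidth before being cut off, while well-behaved neighbors are unaffected.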
  13. A Semi-distributed Reputation-based Intrusion Detection System for Mobile Ad hoc Networks: We propose a semi-distributed approach to a reputation-based Intrusion Detection System (IDS) that combines with the Dynamic Source Routing (DSR) protocol to strengthen the defense of a MANET. Our system inherits the features of reputation from human behavior, making the IDS socially inspired. It has a semi-distributed architecture, as the critical observations of the system are neither spread globally nor restricted locally. The system assigns maximum priority to a node's self-observation when updating any reputation parameter, thus avoiding the need for a trust relationship between nodes. Our system is also unique in that it features the concepts of Redemption and Fading with a robust Path Manager and Monitor system. Simulation studies show that DSR fortified with our system outperforms normal DSR in terms of packet delivery ratio and routing overhead, even when up to half of the nodes in the network behave maliciously. [2006] A Ph.D. course seminar on this topic was delivered by Ms. Simoni Shah, as part of her course work under my supervision, on 28-Nov-2006 at TIFR.
  14. A New Protocol to Counter Online Dictionary Attacks: The most popular method of authenticating users is through passwords. Though passwords are the most convenient means of authentication, they bring with them the threat of dictionary attacks. Offline dictionary attacks are possible only if the adversary can capture the data of a successful protocol execution by eavesdropping on the communication channel, and can be successfully countered using public key cryptography; online dictionary attacks, however, can be performed by anyone, and there is no satisfactory solution to counter them. In this work, we propose an authentication protocol which is easy to implement without any infrastructural changes and yet prevents online dictionary attacks. Our protocol uses only one-way hash functions and eliminates online dictionary attacks by implementing a challenge-response system. This challenge-response system is designed so that it hardly poses any difficulty to a genuine user but is extremely burdensome, time-consuming and computationally intensive for an adversary trying to launch hundreds of thousands of authentication requests, as in an online dictionary attack. The protocol is perfectly stateless and thus less vulnerable to denial-of-service (DoS) attacks. [2006]
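    The one-way-hash core of such a challenge-response exchange can be sketched as follows. This shows only the basic idea, not the paper's protocol: the actual scheme additionally imposes heavy computation on mass guessers, and the credential storage here is deliberately simplified:

```python
import hashlib
import os

def respond(password: str, challenge: bytes) -> bytes:
    """Client side: answer a fresh server challenge with
    H(challenge || password), so the password never crosses the wire
    and a captured response is useless for other challenges."""
    return hashlib.sha256(challenge + password.encode()).digest()

def verify(stored_password: str, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the expected response and compare."""
    return respond(stored_password, challenge) == response

challenge = os.urandom(16)   # fresh random challenge per attempt
print(verify("hunter2", challenge, respond("hunter2", challenge)))  # True
print(verify("hunter2", challenge, respond("guess1", challenge)))   # False
```

    Because the challenge is fresh per attempt, an online guesser must complete a full round trip (plus the scheme's extra client-side work) for every candidate password.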
  15. The N/R One Time Password System: A new one-time password system is described which is secure against eavesdropping and server database compromise at the same time. Traditionally, these properties have proven difficult to satisfy simultaneously, and only one previous scheme, Lamport hashes (also called the S/KEY one-time password system), has claimed to achieve that. Lamport hashes, however, have the limitations that they are computationally intensive for the client and that the number of times a client may log in before the system must be re-initialized is small. We address these limitations to come up with a new scheme called the N/R one-time password system. The basic idea is to have the server aid the client's computation by inserting 'breakpoints' in the hash chains. Client computational requirements are dramatically reduced without any increase in the server's computational requirements, and the number of times a client may log in before the system has to be re-initialized is also increased significantly. The system is particularly suited to mobile and constrained devices having limited computational power. [2005]
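    The underlying Lamport hash-chain mechanism can be sketched briefly. The server stores the chain head h^n(seed); to log in, the client reveals the preimage h^(n-1)(seed), which the server checks with one extra hash. (The breakpoint placement of the N/R scheme is not shown; this is only the baseline it improves.)

```python
import hashlib

def h(x: bytes) -> bytes:
    """One-way hash step of the chain."""
    return hashlib.sha256(x).digest()

def chain(seed: bytes, n: int) -> bytes:
    """Apply h to the seed n times: h^n(seed). In plain Lamport/S-KEY
    the client recomputes long prefixes of this chain at every login,
    which is the cost the N/R 'breakpoints' are designed to avoid."""
    v = seed
    for _ in range(n):
        v = h(v)
    return v

def verify(server_stored: bytes, client_value: bytes) -> bool:
    """Server check: the revealed value must hash to the stored one.
    After success the server stores client_value for the next login."""
    return h(client_value) == server_stored
```

    An eavesdropper who sees h^(n-1)(seed) cannot derive the next password h^(n-2)(seed), and a stolen server database holds only hash values, never preimages.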
  16. Distributed Defense against RREQ Flooding in AODV: In Mobile Ad Hoc Networks (MANETs), various types of Denial of Service (DoS) attacks are possible because of inherent limitations of the routing protocols. Taking the Ad Hoc On-Demand Distance Vector (AODV) routing protocol as the base protocol, it is possible to find a suitable solution to the attack of initiating/forwarding fake Route Requests (RREQs), which hogs network resources and hence causes denial of service to genuine nodes. A proactive scheme was proposed that could prevent this specific kind of DoS attack and identify the misbehaving node. Since the proposed scheme was distributed in nature, it also had the capability to prevent Distributed DoS (DDoS) attacks. The performance of the proposed algorithm, in a series of simulations, revealed that it provided a better solution than existing approaches, with no extra overhead. [2004-2005]
  17. Detection of Malicious Nodes in MANET: MANET (Mobile Ad Hoc Network) routing protocols have many security flaws. Malicious nodes can easily disrupt communication by redirecting traffic and by generally not adhering to the routing protocol in use. In the AODV (Ad hoc On-Demand Distance Vector) routing protocol, a compromised or malicious node might refrain from forwarding data to a correct intermediate node, and this cannot be checked for in the pure AODV protocol.
    To overcome such malicious behavior, each node in the route (in the AODV protocol) was provided with information about the forward path. This information helped to verify whether the next-hop node was forwarding packets on the correct path along the formed route, and involved the mobile nodes listening in promiscuous mode to verify correct forwarding of packets.
    The proposed scheme prevented a malicious node from disrupting a route and launching attacks such as dropping and misrouting of packets, thus ensuring that the AODV routing packets as well as the data packets were correctly forwarded. [2004-2005]
    [A related work can be found in: H. Yang and S. Lu, Self-Organized Network Layer Security in Mobile Ad Hoc Networks, First ACM Workshop on Wireless Security (WiSe), 2002. The work here was done independently.]
  18. Forming COUNCIL Based Clusters in Securing Wireless Ad Hoc Network: In the Cluster Based Routing Protocol (CBRP), a two-level hierarchical structure reduces over-flooding in wireless ad hoc networks, but is vulnerable to the single-point-of-failure problem. In such a structure, the security issue of a compromised node becomes significant. A new adaptive distributed threshold scheme was proposed in which the cluster head within each cluster is replaced by a group of cluster heads, called a COUNCIL, and the service of the cluster head is distributed among the multiple cluster heads using a (k, n) threshold secret sharing scheme. An ad hoc network formed by COUNCIL-based clusters can still work properly as long as the number of compromised cluster heads is smaller than k. To implement this adaptive threshold scheme in wireless ad hoc networks, the clusters must themselves be formed in an adaptive way; in this work, the algorithm for the formation of COUNCIL-based clusters was discussed. [2003-2004]
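    The standard (k, n) construction is Shamir's secret sharing: the secret is the constant term of a random degree-(k-1) polynomial over a prime field, and each cluster head holds one point of the polynomial. A compact sketch (the field prime and parameter choices are illustrative, not tied to the paper):

```python
import random

P = 2**127 - 1  # illustrative prime field for the arithmetic

def make_shares(secret, k, n):
    """Shamir (k, n) sharing: evaluate a random degree-(k-1)
    polynomial with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0: any k shares recover the
    secret; fewer than k reveal nothing about it."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

    In the COUNCIL setting, any k of the n cluster heads can jointly perform the cluster-head service, so compromising fewer than k of them neither exposes the secret nor halts the cluster.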
  19. Vcache: Caching Dynamic Documents: Traditional web caching is limited to static documents only. A page generated on the fly by a server-side script may have different contents on different accesses and hence cannot be cached. A number of proposals for attacking this problem have emerged, based on the observation that different instances of a dynamic document are usually quite similar, i.e., they share a lot of common HTML code. In this work, these related techniques were first reviewed and their inadequacy for practical use was shown. Then a general and fully automatic technique called Vcache, based on the decomposition of dynamic documents into a hierarchy of templates and bindings, was presented. [2003-2004]
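    The template-and-bindings decomposition can be illustrated with a toy example. This is only a sketch of the idea, not Vcache itself (which derives templates automatically and hierarchically); the placeholder syntax below is an assumption:

```python
def to_template(html: str, bindings: dict) -> str:
    """Factor a dynamic page into a cacheable template by replacing
    each binding value with a named placeholder. The template is the
    invariant part shared across instances of the page."""
    for name, value in bindings.items():
        html = html.replace(value, "{%s}" % name)
    return html

def render(template: str, bindings: dict) -> str:
    """Re-instantiate the cached template with fresh bindings."""
    return template.format(**bindings)

tpl = to_template("<p>Hello Alice</p>", {"user": "Alice"})
print(tpl)                         # <p>Hello {user}</p>
print(render(tpl, {"user": "Bob"}))  # <p>Hello Bob</p>
```

    Only the small, per-access bindings need to travel or be regenerated; the bulky template is cached like any static document.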
  20. Implementing Security with Minimal Performance-Degradation in Computational Grids: An idea for implementing security without appreciable performance degradation in grids was presented. The security issues of a grid were addressed and solutions were provided for them. Algorithms for secure key transfer were presented. A suitable alternative to computationally expensive encryption was suggested, which used a key for message authentication. Methods of exchanging the required key were also discussed. [2003-2004]
  21. A Novel Multi-path Approach to Security in Mobile Ad Hoc Networks (MANETs): A novel encryption-less algorithm to enhance the security of data packets transmitted across mobile ad hoc networks was presented. The work hinged on the paradigm of multi-path routing and exploited the properties of polynomials. The first step of the algorithm transformed the data such that no information can be obtained without possessing the entire transformed data. The algorithm then used the intuitively simple idea of a jigsaw puzzle to break the data into multiple packets, these packets forming the pieces of the puzzle, which were sent along disjoint paths to the receiver. A secure and efficient mechanism was provided to convey the information necessary for recovering the original data from its fragments at the receiver end, that is, for solving the jigsaw puzzle. The algorithm was designed so that no intermediate or unintended node can obtain the entire data, and an authentication code was used to ensure the authenticity of every packet. [2003-2004]
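    The all-or-nothing property of the first step can be illustrated with a simple XOR-based split; this is a stand-in for the paper's polynomial-based transform, chosen only because it makes the property obvious:

```python
import os

def split_all_or_nothing(data: bytes, n: int):
    """Split data into n fragments such that ALL n are required to
    recover it: n-1 random pads, plus data XORed with every pad.
    Any n-1 fragments are statistically independent of the data."""
    pads = [os.urandom(len(data)) for _ in range(n - 1)]
    last = bytes(data)
    for p in pads:
        last = bytes(a ^ b for a, b in zip(last, p))
    return pads + [last]

def recombine(fragments):
    """Receiver end: XOR of all fragments restores the original data."""
    out = fragments[0]
    for f in fragments[1:]:
        out = bytes(a ^ b for a, b in zip(out, f))
    return out
```

    Sending the fragments along node-disjoint paths then ensures that no single intermediate node, seeing only the fragments on its own path, learns anything about the payload.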
  22. Multilevel Adaptive Intrusion Detection System: This real-time Intrusion Detection System (IDS) adopted a radically different approach compared to other contemporary IDSs. The system performed extensive checks by working at both the network and the application layer. It had an agent-based architecture with a unique four-state activity characterization, and detected intrusions characterized both by an isolated event and by a sequence of events. It evolved a new serial approach to processing audit data by prioritizing intrusion checking over checking for regular behavior. A self-developed formulation of system statistics and a weighted priority for system parameters eliminated the problem of false alarms seen in contemporary IDSs. Comparing how close the current activity was to a known intrusion as well as to normal behavior helped to precisely pinpoint and record the characteristics of an intrusion, and an adaptive genetic algorithm not only generated initial user profiles but also updated them to reflect current user behavior. [2001-2002]
  23. Simulation and Performance Evaluation of Switched Routers: The construction and working of a router, its interaction with other network components and, finally, a possible method to improve its performance were simulated and evaluated. Apart from the standard work of a router (packet forwarding, path calculation, etc.), the role of the router in congestion control was implemented, using Random Early Detection with Early Congestion Notification. Multi-protocol Label Switching (MPLS) and the possible improvement it offers were implemented, and different parameters were monitored. It was observed that MPLS offers a performance improvement, but only under heavy load. The ever-increasing load on the Internet, which is unlikely to go down, justified the results obtained. [1999-2000]
  24. A Distributed Highly Reliable and Available Web-server Cluster: Available clustered architectures for web servers were examined and a new system was designed, providing high availability (serving the maximum possible number of clients) and high reliability (the degree to which the system retains its clients when a part of the system fails). It also achieved load balancing by distributing client requests across multiple nodes within the cluster, transparently to the users, thus improving server capacity and response. Scalability (in the event of the availability of a large number of servers) and dynamic reconfigurability in the presence of partial faults were also examined. [1999-2000]
  25. Theory and Design of t-Unidirectional Error Correcting and d-Unidirectional Error Detecting Code: The basic theory and methods of construction of t-UEC and d-UED codes were developed, and encoding and decoding algorithms were discussed. It was shown that by assuming a t-Asymmetric EC code in place of a t-UEC code, 3 check bits could be saved. Some bounds for t-EC d-UED codes were improved, and encoding and decoding procedures for these codes were also discussed. [1994-1995]
    (Collaborators: D. K. Ray-Chaudhuri, Ohio State University; N. M. Singhi and P. S. Subramanin, TIFR)
    [Cited in: Joshua N. Cooper and Robert B. Ellis (Department of Mathematics, University of California at San Diego) and Andrew B. Kahng (Department of Computer Science and Engineering, University of California at San Diego), Asymmetric Binary Covering Codes; http://www.math.nyu.edu/~cooper/codes.pdf]
  26. Fault-Tolerance in Hypercube-based Parallel Architectures: The n-dimensional binary hypercube (n-cube) is one of the most popular interconnection topologies used in the construction of modern parallel computers. One of the main reasons for this popularity is the high connectivity offered by the n-cube, which translates into practical possibilities for fail-safe computing on such architectures. Fault-tolerance is a feature that is now part and parcel of every architecture, given the growing dependence on and necessity of high-speed computing for critical as well as commercial applications. It has been proved that, with high probability, a faulty hypercube can simulate a fault-free hypercube with only a constant factor of slowdown; thus the n-cube architecture is extremely tolerant of randomly distributed faults.
    We studied the problem of finding the maximum number of faulty nodes that a given n-cube can contain and still remain connected. Our method depended on studying a related mathematical problem in extremal set theory. We described methods to obtain optimal solutions for the above-mentioned problem and proved optimality in several cases. We also observed that there are a number of upper bounds, and gave indications towards proofs for obtaining the least upper bound. [1994-1995]
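    The connectivity question itself is easy to probe computationally for small n, even though the extremal analysis is not. A brute-force sketch (illustrative only; the theoretical work is about extremal bounds, not search):

```python
def hypercube_connected_after_faults(n, faulty):
    """Check by graph search whether the n-cube minus the faulty nodes
    is still connected. Nodes are the integers 0..2^n-1; two nodes are
    adjacent iff they differ in exactly one bit."""
    faulty = set(faulty)
    nodes = [v for v in range(1 << n) if v not in faulty]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        v = stack.pop()
        for b in range(n):
            w = v ^ (1 << b)          # flip one bit -> a neighbor
            if w not in faulty and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

# Removing all n neighbors of a single node disconnects it, so n faults
# can suffice to break connectivity:
print(hypercube_connected_after_faults(3, {1, 2, 4}))  # False
```

    The extremal question asks how large a fault set can be while every such search still succeeds, which is where the connection to extremal set theory enters.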