SRM Research Institute, Bangalore

SRM Research Institute, Bangalore is focused on several industry verticals and domains to study, research, and develop techniques that solve market-critical problems. In particular, the current areas of focus are as follows:


Domain of Education

Technology has advanced manifold in multiple directions, in terms of tools and gadgets, connectivity, availability, and ease of use. Yet these advancements have had only a limited impact on the educational system, and there is both an immediate need and ample scope for deriving the benefits of technology there. Technology has a key role to play in addressing the challenges of the educational system, for example by providing a pervasive, collaborative, and interactive environment. The trends of globalization and the technological changes that have transformed the workplace provide an opportunity to create an educational system that will work in this new world.

Driving technology into the educational system requires the extensive adoption of a range of presentation tools, including computers, interactive whiteboards, and handhelds, as well as digital management and Web 2.0 solutions. A futuristic educational system includes high-end technology such as smart card-enabled lockers, bookless libraries, smart classrooms, and new types of instructional software applications to aid students with differing abilities. Technology thus has a role to play all the way from introducing basic wireless systems to enabling remote-presence-based education and bringing out new innovations such as student-specific PCs, and it is pertinent to launch high-end technology projects widely as part of the educational system.

Various forums and groups are focused on enabling the educational system to use technology. These include professional organizations such as the International Technology Education Association for technology, innovation, design, and engineering educators; associations of educators such as the Association for Educational Communications and Technology, which strives to improve instruction through technology; and the Association for the Advancement of Computing in Education, an international, not-for-profit educational organization with the mission of advancing information technology in education and e-learning research. An active forum that addresses the needs of diverse players in the education system is the International Society for Technology in Education, which strives to advance excellence in learning and teaching through the innovative and effective use of technology, and which has brought out the National Educational Technology Standards that help measure proficiency and set goals for what students, teachers, and administrators should know and be able to do with technology.

It is the need of the hour to develop tools and techniques that can help bring technology naturally, and as a matter of course, into the domain of education.

Our research objective is to provide technological solutions that address several student-centric needs, spanning a horizon from rural primary schools to universities of higher learning.


Domain of Wireless Technology

Wireless technology has a far-reaching influence on our society: wearable devices for medical, educational, and critical support systems; sensor networks and systems for monitoring and distributed control to address safety needs and environment management; smart-grid systems; and embedded wireless for entertainment systems. This has fuelled an exponential growth in wireless devices: from about five billion devices to about a hundred billion devices by 2025. All of these devices demand connectivity: to peers, to public systems, to private systems, to office systems, and to much more with vehicular networks and machine-to-machine networks. This demand for connectivity is accompanied, and in part caused, by an orders-of-magnitude increase in device density.

One of the imperatives is that spectrum, a recognized and categorized "scarce" natural and national resource, be utilized efficiently. The apparent scarcity of spectrum is illustrated by the fact that the regulating authorities have allocated most bands, yet studies have shown that about 70% of the allocated spectrum goes unutilized. This discrepancy between allocation and utilization opens the way for opportunistic exploitation of the spectrum.

Cognitive radios and cognitive radio networks are targeted at addressing this situation in a scalable manner: cognitive radios enhance the utilization of spectrum and address the need to use the radio spectrum efficiently and fairly, while cognitive radio networks are complex, multiuser wireless communication systems that perceive the radio environment in order to learn, adapt their transceivers, and facilitate cooperative communication among multiple users. Dynamic spectrum access provides an immediate opportunity for efficient spectrum usage and for the introduction of new services, and it requires addressing issues related to spectrum management and economics, spectrum-coexistence algorithms and protocols, and enabling radio technologies. For example, a near-term issue is how to exploit the so-called TV white space to build broadband wireless access networks. The important wireless technologies include WLAN, WiMAX, WPAN, wireless networks for digital TV and radio broadcast, and wireless 3G and beyond (LTE). Wireless 4G and beyond (LTE-Advanced, an evolution of LTE) is targeted at leveraging advanced network topologies (comprising femtocells, picocells, and relays) to provide significantly higher network capacity while ensuring user fairness and improving cell-edge performance.
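
As a concrete, minimal illustration of the spectrum-sensing step that opportunistic access relies on, the sketch below implements simple energy detection in Python. The noise model, the 3 dB decision margin, and the sample counts are illustrative assumptions for exposition, not parameters of any particular standard or of our own systems.

    # Minimal sketch of energy-detection spectrum sensing: a secondary user
    # declares a band busy when the measured energy exceeds the known noise
    # floor by a chosen margin. All numbers here are illustrative assumptions.
    import numpy as np

    def band_is_occupied(samples, noise_power, margin_db=3.0):
        """Energy detector over a block of complex baseband samples."""
        energy = np.mean(np.abs(samples) ** 2)
        threshold = noise_power * 10 ** (margin_db / 10.0)
        return energy > threshold

    # Usage: sense 1024 samples and transmit only if the primary appears idle.
    rng = np.random.default_rng(0)
    noise = (rng.normal(size=1024) + 1j * rng.normal(size=1024)) / np.sqrt(2)
    print(band_is_occupied(noise, noise_power=1.0))        # noise only: likely False
    print(band_is_occupied(noise + 2.0, noise_power=1.0))  # strong signal: likely True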

The transition across the wireless generations is to advance from being operator-centric to service-centric and then to user-centric. Further, these transitions enable end-users to connect seamlessly using the best available technology, help operators deploy strategies effortlessly, and bring effective configurability and efficient utilization of resources (most importantly, spectrum).

A key factor is heterogeneity: network heterogeneity (femtocells, picocells, macrocells) and access heterogeneity (Bluetooth, WLAN, WiMAX, 2G, 3G and beyond). This multi-faceted heterogeneity requires autonomicity, enabling layered decision-making while reducing overall system complexity. Self-organizing networks provide the self-configuration, self-optimization, and self-healing features of autonomicity, leading to seamless and assured connectivity without much intervention. Further, self-organizing radio networks are capable of responding to environmental changes such as interference, device density, and end-user application requirements.
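
To make the idea of layered decision-making in a heterogeneous setting concrete, the following toy sketch scores hypothetical access options on signal quality and load and attaches to the best one. The attributes, weights, and option names are made up for illustration and are not drawn from any standardized self-organizing algorithm.

    # Hypothetical sketch of access selection in a heterogeneous network.
    from dataclasses import dataclass

    @dataclass
    class AccessOption:
        name: str               # e.g. "WLAN", "LTE macrocell", "femtocell"
        signal_quality: float   # normalized to 0..1
        load: float             # fraction of capacity in use, 0..1

    def select_access(options, w_signal=0.7, w_load=0.3):
        """Pick the option with the best weighted score (higher is better)."""
        score = lambda o: w_signal * o.signal_quality - w_load * o.load
        return max(options, key=score)

    candidates = [
        AccessOption("WLAN", signal_quality=0.9, load=0.8),
        AccessOption("LTE macrocell", signal_quality=0.6, load=0.3),
        AccessOption("femtocell", signal_quality=0.8, load=0.1),
    ]
    print(select_access(candidates).name)   # "femtocell" for these numbers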

The area of wireless technology is rich with challenging and applied research issues: spectrum sensing and access algorithms and systems, the overall cognitive radio system and cognitive radio network architectures, wireless multi-networking, multi-layered protocol structures, cooperative systems, and self-adaptations and resource sharing.

Our research objective is to contribute towards large-scale heterogeneous networks leading to the realization of the Internet of Things.


Domain of Multimedia

Digital content has come to play an indispensable role in our lives. With large quantities of varied audio-visual content being created by both organized and amateur sources, the challenge of providing the right content to the right consumer has acquired significance. Audio content in particular is almost all-pervasive, from metro radio stations to amateur music albums. Audio processing has come of age in providing novel and beneficial audio technologies that have a direct impact on user experience and, in some cases, on quality of life and security; a prime example is efficient techniques for the search and retrieval of relevant content by users. Until recently, the indexing and retrieval of music content relied largely on the metadata provided by the content creators.

Recent times have seen techniques such as query by humming and search based on sample audio clips. Given the size of audio content databases and the varied nature of user requirements, these kinds of services call for large-scale automation of the indexing and retrieval process, which involves intelligent processing of audio content at different levels to extract semantics and perceptible characteristics.
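
As a small sketch of what extracting a "perceptible characteristic" can look like, the snippet below computes the spectral centroid (a rough brightness measure) frame by frame with plain NumPy. Real indexing systems use much richer features (MFCCs, chroma, and the like), but the pipeline shape of frame, transform, and summarize is the same; the frame sizes and test tone are arbitrary choices.

    # Minimal low-level audio feature extraction: per-frame spectral centroid.
    import numpy as np

    def spectral_centroid(signal, sample_rate, frame_size=2048, hop=512):
        centroids = []
        for start in range(0, len(signal) - frame_size + 1, hop):
            frame = signal[start:start + frame_size] * np.hanning(frame_size)
            magnitude = np.abs(np.fft.rfft(frame))
            freqs = np.fft.rfftfreq(frame_size, d=1.0 / sample_rate)
            if magnitude.sum() > 0:
                centroids.append(np.sum(freqs * magnitude) / np.sum(magnitude))
        return np.array(centroids)

    # A pure 440 Hz tone should yield centroids close to 440 Hz.
    sr = 16000
    t = np.arange(sr) / sr
    print(spectral_centroid(np.sin(2 * np.pi * 440 * t), sr).mean())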

Another important research direction in music-related audio is music synthesis. Although the generation of fully synthetic music is a distant dream, new musical instruments that generate synthetic music based on critical inputs from musicians are increasingly being attempted. With ever-increasing real-time audio streaming services and Internet radio stations, the challenge of efficiently transmitting audio data within bandwidth and quality constraints has become significant. Academic and corporate researchers the world over are actively contributing to audio research in an endeavor to make the world noise free, enhance user experience, and improve the quality of communication between individuals, and between individuals and machines. Apart from the entertainment industry, the biometrics and security industries are also major beneficiaries of ongoing audio research.
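
A toy illustration of the simplest form of synthesis mentioned above is additive synthesis: a note built from a fundamental and a few harmonics shaped by a decay envelope. The harmonic amplitudes and envelope below are arbitrary choices, not a model of any real instrument.

    # Toy additive synthesis of a single note.
    import numpy as np

    def synthesize_note(freq, duration=1.0, sample_rate=44100):
        t = np.arange(int(duration * sample_rate)) / sample_rate
        harmonics = [(1, 1.0), (2, 0.5), (3, 0.25)]   # (multiple, amplitude)
        tone = sum(a * np.sin(2 * np.pi * freq * k * t) for k, a in harmonics)
        envelope = np.exp(-3.0 * t)                    # simple exponential decay
        return (tone * envelope) / np.max(np.abs(tone))

    note = synthesize_note(440.0)   # concert A; write to a WAV file to listen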

One of the long-cherished dreams of humans has been to replicate human visual capabilities in a machine. The invention of the camera was the first major step in this direction. With the development of computers, followed by the revolution in digital content, the focus of researchers has turned more towards the perceptual aspects of human vision, and efforts are directed at analyzing and deriving semantics from digitized visual information.

The objectives of visual content analysis depend on the domain under consideration, such as manufacturing and industrial automation, healthcare and disease management, security and surveillance, entertainment and marketing, and transport and travel. Image processing is used to identify specific objects or their states, which in turn feed high-level decision making. It is exploited most effectively in catering to the ever-increasing demands of viewers in the entertainment domain, be it in terms of hi-tech special effects, superior image quality with minimal storage and transmission overheads, or efficient search and retrieval systems.

One of the successful applications of image processing comes in the form of driver assistance by in-vehicle cameras, which provide enhanced safety through collision avoidance and lane-change detection systems. Image-based applications present significant challenges due to varying lighting and other environmental conditions, necessitating the use of multiple techniques such as pattern recognition, machine learning, psycho-visual analysis, and information theory. Current research focuses on advanced techniques that combine the analysis of multiple visual sensors to arrive at actionable inferences.
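
One classical building block of such a lane-detection pipeline is edge detection followed by a Hough transform to find straight line segments, sketched below with OpenCV. The thresholds and the input file name are illustrative assumptions; production systems add region-of-interest masking, temporal tracking, and learned components to cope with the lighting variations noted above.

    # Rough sketch: find straight line segments (lane-marking candidates) in a frame.
    import cv2
    import numpy as np

    def detect_line_segments(image_path):
        frame = cv2.imread(image_path)          # "road_frame.jpg" is hypothetical
        if frame is None:
            return []
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)
        # Probabilistic Hough transform returns end points of detected segments.
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                minLineLength=40, maxLineGap=20)
        return [] if lines is None else [tuple(l[0]) for l in lines]

    # e.g. detect_line_segments("road_frame.jpg") -> [(x1, y1, x2, y2), ...]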

Analysis of multisensory data provides a greater opportunity to realize the goals of the computer vision community by integrating and analyzing multi-dimensional visual data to gain a near three-dimensional understanding of objects and their states. Applications of multi-sensor fusion include multi-camera tracking of objects in surveillance, remote sensing, and battlefield target recognition. The processing and understanding of multisensory data for integration is still a major challenge for the computer vision community.
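
A minimal sketch of one standard fusion rule is shown below: combining independent noisy estimates of the same quantity (say, a target position reported by two cameras) by inverse-variance weighting. The numbers are illustrative; real fusion pipelines must also handle registration, synchronization, and correlated errors.

    # Inverse-variance weighted fusion of scalar estimates from two sensors.
    import numpy as np

    def fuse(estimates, variances):
        weights = 1.0 / np.asarray(variances, dtype=float)
        fused = np.sum(weights * np.asarray(estimates, dtype=float)) / np.sum(weights)
        fused_variance = 1.0 / np.sum(weights)
        return fused, fused_variance

    # Two sensors report 10.2 m and 9.8 m with variances 0.04 and 0.01:
    # the fused estimate leans toward the more precise sensor.
    print(fuse([10.2, 9.8], [0.04, 0.01]))   # ~ (9.88, 0.008)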

The majority of the information available on the World Wide Web, now the single most referenced repository of almost all the knowledge in the world, is textual, and this necessitates the use of text processing techniques to build better web search engines. News available on websites and news servers is parsed to create news capsules recommended to individual and corporate users. Further, with the advent of social networking, blogging, and other online publishing mechanisms, the available unstructured textual data has grown exponentially, pressing the need for efficient and intelligent text processing mechanisms. The volume of the available data, as well as the users' inherent need to easily correlate and understand this voluminous data, has paved the way for processing text both syntactically and semantically.

Text is processed at various levels of depth, involving data gathering, preprocessing, mining, and processing to generate meaningful information for users, while overcoming challenges such as conversion between formats, removal of unnecessary information, and the creation of linkages among data elements based on similarity measures. Text processing also involves natural language processing techniques, statistical techniques, and visualization of the available data; for example, bioinformatics and genetic engineering efforts such as the Human Genome Project make use of text processing algorithms to analyze biological data such as nucleotide and protein sequences for similarity detection.
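
A small sketch of one syntactic linkage step described above is pairwise document similarity under TF-IDF weighting, shown here with scikit-learn. The sample documents are made up for illustration.

    # TF-IDF vectors plus cosine similarity as a simple syntactic linkage measure.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "Spectrum sensing enables cognitive radios to find unused bands.",
        "Cognitive radio networks sense the spectrum and adapt transmission.",
        "Image processing supports lane detection for driver assistance.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(documents)
    similarity = cosine_similarity(tfidf)

    # similarity[0, 1] is noticeably higher than similarity[0, 2], which is
    # essentially zero, since the first two documents share vocabulary.
    print(similarity.round(2))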

Internal security agencies use advanced text processing algorithms to analyze e-mails and other encrypted content. Semantic analysis of text involves analyzing the text from the users', servers', and domains' perspectives, and this combined, comprehensive analysis is important when analyzing Internet queries. The current Web 2.0 standard, through its easy-to-use UIs, integrates multiple sources of data, including website data, to help create tag clouds. Any next-generation Internet application needs to be "semantically charged" to bridge the gap between a user's need and the system's perspective, using a domain ontology (of the system) for information mining, correlating, and chaining, and to provide semantically accurate responses.
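
As a toy sketch of the data behind a tag cloud, the snippet below counts term frequencies in a few made-up posts; a Web 2.0 UI would render each tag with a font size proportional to its count. The posts and the length filter are purely illustrative.

    # Term frequencies as tag-cloud weights.
    import re
    from collections import Counter

    posts = [
        "Cognitive radio research needs wide spectrum sensing",
        "Spectrum sensing research for next generation networks",
    ]
    words = re.findall(r"[a-z]+", " ".join(posts).lower())
    tag_weights = Counter(w for w in words if len(w) > 3)
    print(tag_weights.most_common(5))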

Our research objective is to address the cataloguing and processing of big data containing multimedia content, with the further objective of building solutions that enhance enterprise productivity through multimedia analysis.


Domain of Next Generation Networks and Computational Systems

The Internet has transformed itself into a virtual world with more than a billion users collaborating through a diverse set of services, ranging from old-fashioned e-mail to e-commerce, e-governance, and social networks. A closer look at the underlying technology currently supporting such services reveals the inherent need to align the technology with the services hosted on it, leading to the concept of Next Generation Service-Oriented Networks. The need for extreme dynamism in next generation networks demands that every entity on the Internet be flexible, adaptive, and cooperative. This leads to effective service-specific customization and quicker provisioning of the network to deliver services in a much more cost-conscious manner.

A core concern for current and future applications is the need to be environmentally friendly. Current data centers have thousands of servers and routers that are always on, but on average only a small percentage of them are in use. Next generation networks need to enhance the utilization of powered-on entities in an intelligent manner. Another interesting feature, from the applications' perspective, is that the network should provide seamless connectivity so that the deployed services are "always on."

An essential ingredient is to exploit bleeding-edge technologies such as virtualization. Virtualization enables the slicing of a single physical entity into multiple network entities. Each virtual entity can interconnect with other such entities to create mini-internets, all through a single physical system. As the number of entities and services increases exponentially, the network needs to be "semantically aware" to support open and easy integration. Next generation networks provide an excellent platform for deploying next generation services, which need to be highly user-friendly, ubiquitous, and adaptive in terms of network availability, location, and devices; the use of XML as a message format for web services is a good example of bringing openness to next generation networks.
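
To illustrate that point about openness, the small sketch below builds and parses a service request using only the Python standard library, so any peer that understands the (hypothetical) schema can interoperate without proprietary tooling. The element names and values are made up for illustration.

    # Building and parsing an XML service message with the standard library.
    import xml.etree.ElementTree as ET

    # Build a request message (element names are illustrative).
    request = ET.Element("serviceRequest", attrib={"id": "42"})
    ET.SubElement(request, "operation").text = "provisionSlice"
    ET.SubElement(request, "bandwidthMbps").text = "100"
    message = ET.tostring(request, encoding="unicode")
    print(message)

    # Any receiver can parse the same message independently of the sender's stack.
    parsed = ET.fromstring(message)
    print(parsed.find("operation").text, parsed.find("bandwidthMbps").text)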

Our research objective is to exploit virtualization techniques at both the system and network levels, leading to software-defined networks.