AI-Empowered Metaverse Streaming

Thanks to advances in emerging technologies such as extended reality, artificial intelligence, blockchain, 5G, and cloud/edge computing, the metaverse has attracted a great deal of attention from both industry and academia. Through extended reality devices, people can take advantage of the virtual world provided by the metaverse to play, work, and socialize. Metaverse streaming is a promising solution that performs complex rendering tasks at the edge server and streams the resulting video sequence back to the users over the network. We will discuss metaverse streaming techniques and the potential use of machine learning (ML) to dynamically choose appropriate bit rates for multiple users sharing a bottleneck network. We will show preliminary results and discuss open questions and possible R&D avenues.
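The multi-user bitrate problem can be made concrete with a simple rule-based baseline; the ML approach discussed in the talk would learn such decisions rather than hard-code them. The function below is a hypothetical sketch, not the speaker's method: it greedily upgrades the worst-off user's quality level while the shared bottleneck capacity permits, a max-min-fair heuristic.

```python
def allocate_bitrates(demands, capacity):
    """Greedy bitrate allocation over a shared bottleneck.

    demands: one sorted bitrate ladder (kbps) per user.
    capacity: shared bottleneck capacity in kbps.
    Returns the chosen ladder index per user.
    """
    choices = [0] * len(demands)                      # start everyone at the lowest rung
    used = sum(ladder[0] for ladder in demands)
    upgraded = True
    while upgraded:
        upgraded = False
        # try to upgrade the currently worst-off user first (max-min fairness)
        order = sorted(range(len(demands)), key=lambda u: demands[u][choices[u]])
        for u in order:
            i = choices[u]
            if i + 1 < len(demands[u]):
                extra = demands[u][i + 1] - demands[u][i]
                if used + extra <= capacity:          # does the upgrade still fit?
                    choices[u] = i + 1
                    used += extra
                    upgraded = True
                    break
    return choices
```

For two users with the ladder [300, 800, 1500] kbps and a 2000 kbps bottleneck, both settle at 800 kbps; with 3000 kbps, both reach 1500 kbps.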


PROF. MONCEF GABBOUJ (Plenary Speaker)

The Super Neuron Model – A New Generation of ANN-Based Machine Learning and Applications

Operational Neural Networks (ONNs) are new-generation network models that aim to address two major drawbacks of conventional Convolutional Neural Networks (CNNs): the homogeneous network configuration and the “linear” neuron model, which can only perform linear transformations over previous-layer outputs. ONNs can perform any linear or non-linear transformation with a proper combination of “nodal” and “pool” operators. This is a great leap towards expanding the neuron’s learning capacity beyond that of CNNs; however, ONNs thus far required the use of a single nodal operator for all synaptic connections of each neuron. This restriction has recently been lifted by introducing a superior neuron, the “generative neuron”, in which each nodal operator can be customized during training in order to maximize learning. As a result, the network is able to self-organize the nodal operators of its neurons’ connections. Self-Organized ONNs (Self-ONNs) equipped with superior generative neurons can achieve diversity even with a compact configuration. We shall explore several signal processing applications of neural network models equipped with the superior neuron.
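As a concrete illustration, the generative neuron is commonly formulated in the Self-ONN literature as a Maclaurin (Taylor) series of the input, so each connection learns its own polynomial nodal operator instead of the fixed linear product w·x. The minimal NumPy sketch below shows the idea for a single connection; the function name and shapes are ours, not from the talk.

```python
import numpy as np

def generative_neuron(x, weights, bias=0.0):
    """One generative-neuron connection: replace the linear product w*x
    with a learnable Q-th order Maclaurin polynomial sum_q w_q * x**q,
    so the connection approximates its own non-linear nodal operator.

    x: input activations (array); weights: shape (Q,) polynomial coefficients.
    """
    q = np.arange(1, len(weights) + 1)   # powers 1..Q
    powers = x[..., None] ** q           # broadcast x^q over the last axis
    return powers @ weights + bias       # "pool" operator = summation
```

With Q = 1 the connection reduces exactly to a conventional linear neuron; higher Q lets training shape a different non-linearity per connection.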

PROF. AHMED CHEMORI (Plenary Speaker)

Recent Advances in Motion Control of Parallel Robots for High-Speed Industrial Applications

Serial robotic manipulators consist of a set of sequentially connected links forming an open kinematic chain. These robots are mainly characterized by their large workspace and high dexterity. Despite these advantages, however, they are not always recommended for tasks requiring high speeds/accelerations and/or high precision, because of their lack of stiffness and accuracy. Parallel kinematic manipulators (PKMs) are more suitable for such tasks. The main idea of their mechanical structure is to use at least two kinematic chains linking the fixed base to the travelling plate, each chain containing at least one actuator. This allows a good distribution of the load between the chains. PKMs have important advantages over their serial counterparts in terms of stiffness, speed, accuracy, and payload. However, these robots are characterized by highly nonlinear dynamics, kinematic redundancy, actuation redundancy, uncertainties, singularities, etc. Moreover, in high-speed robotized repetitive tasks, such as food packaging and waste sorting, the key idea lies in seeking short cycle times. This obviously means seeking short motion and stabilization times while guaranteeing robustness and performance with respect to disturbances and changes/uncertainties in the operating conditions. Consequently, all these issues should be taken into account in the control of such robots, which makes it a particularly challenging task.
This talk will give an overview of some proposed advanced control solutions for high-speed industrial applications of PKMs in food packaging, waste sorting, and machining tasks. The proposed solutions are mainly borrowed from nonlinear robust and adaptive control techniques and have been validated through real-time experiments on different PKM prototypes.
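To make the control problem concrete, the classical computed-torque (feedback-linearization) law is the textbook baseline on which robust and adaptive schemes for such manipulators typically build. The sketch below is a generic illustration of that baseline, not one of the speaker's controllers; all symbols follow the standard rigid-body dynamics notation.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, G, Kp, Kd):
    """Computed-torque control law:
        tau = M(q) (qdd_des + Kd*ed + Kp*e) + C(q, qd)*qd + G(q)
    with tracking errors e = q_des - q and ed = qd_des - qd.
    Cancelling the nonlinear dynamics leaves a linear, decoupled
    error dynamics shaped by the PD gains Kp, Kd.
    """
    e = q_des - q
    ed = qd_des - qd
    v = qdd_des + Kd @ ed + Kp @ e       # outer PD loop
    return M @ v + C @ qd + G            # inner dynamics cancellation
```

Robust and adaptive variants replace the exact model terms M, C, G with estimates and add correction terms to cope with the uncertainties and redundancy mentioned above.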


Integrating technologies to develop smarter implants and improved health care

Biomaterials have evolved from inert to biodegradable, bioactive, and multifunctional. With advances in innovative processing techniques, it became possible to build strong yet biodegradable multifunctional implants that have made an impact on the clinical care of many patients worldwide. Advances in tissue engineering made it possible to accommodate cells in biodegradable scaffolds and develop living implants. To mimic tissue structure, nanofiber-based constructs were then developed. With the advent of three-dimensional (3D) bioprinting, cell-containing bioinks were developed and control over cell distribution in engineered tissue constructs was achieved. To further leverage the advantages biodegradable materials offer, biodegradable sensors were developed to allow temporary monitoring of certain functions and parameters in the body. Furthermore, it became possible to develop sensor-integrating implants that can sense changes in their microenvironment before these changes evolve into irreversible problems that lead to implant failure and necessitate surgical removal. There are already major developments, including electroconductive, self-healing, and four-dimensional (4D) biomaterials. In the future, the merging of these approaches and technologies will enable the development of implants with self-awareness, actuation, self-correction/healing, and behavior mimicking that of native tissues.

PROF. MOHSEN GUIZANI (Plenary Speaker)

Smart City Applications with Pervasive AI

Internet of Things (IoT) systems have expanded the role of Artificial Intelligence (AI) in many applications. AI, in turn, has witnessed substantial usage in different IoT applications and services, spanning smart city systems and speech processing to robotics control and military surveillance. This is driven by easier access to sensed data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data streams. Designing accurate models over such data streams to predict future insights and revolutionize decision-making establishes pervasive AI systems as a worthy paradigm for achieving better predictions, which can lead to a better quality of life. The confluence of pervasive computing and artificial intelligence (Pervasive AI) has expanded the role of ubiquitous IoT systems from mainly data collection to executing distributed computations, a promising alternative to centralized learning that presents various challenges, including privacy concerns, scalability, and latency requirements. In this context, wise cooperation and resource scheduling should be envisaged in a smart city, across IoT devices (e.g., smartphones, smart healthcare devices, and smart vehicles) and infrastructure (e.g., edge nodes and base stations), to avoid communication and computation overheads and ensure maximum performance and accuracy.

In this talk, a quick review of the recent techniques and strategies developed to overcome these resource challenges in pervasive AI systems will be given. Specifically, a description of pervasive computing, its architecture, and its intersection with artificial intelligence is presented. Then, we review the background, applications, and performance metrics of AI, particularly Federated Learning (FL), running in a ubiquitous system. Next, we present some communication-efficient techniques for distributed inference, training, and learning tasks across a plethora of IoT devices, edge devices, and cloud servers. Finally, we discuss future directions in this area and provide some research challenges.
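As a concrete illustration of the FL setting mentioned above, one server-side round of Federated Averaging (FedAvg) can be sketched as follows. This is a minimal illustration under the assumption that client updates arrive as NumPy arrays, not a description of any specific system from the talk.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One Federated Averaging (FedAvg) aggregation round: the server
    combines locally trained model weights, weighting each client by
    its local dataset size, so raw data never leaves the devices.
    """
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * np.asarray(w, dtype=float)
    return agg
```

The communication-efficient techniques surveyed in the talk reduce how often, and how much of, such updates must cross the network between devices, edge nodes, and cloud servers.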

PROF. KAAN OZBAY (Plenary Speaker)

Traffic Control for Efficient and Safe Transportation Systems in the Era of Connected and Automated Vehicles 

In the last several years, there have been a number of novel approaches to improving traffic operations and safety, especially in highly congested urban areas. Most of the innovation in these research and deployment efforts is fueled by advances in connected and autonomous vehicles (CAVs), as well as the ubiquitous mobile devices, sensors, and cameras deployed throughout these urban areas. In this talk, we will first provide a high-level overview of the state of traffic control research and deployment resulting from advances in connected and automated vehicles and V2X communication technologies. We will then discuss research on proactive traffic management and control approaches, with a focus on operations and safety in tandem. A case study from the recently completed New York City connected vehicle pilot, in which New York University C2SMART researchers participated as one of the university partners, will be presented to describe the “soft control” approach in terms of in-vehicle warnings given to drivers. Second, the role and importance of a real-world cyber-physical test bed in these research efforts will be discussed using a learning-based headway control algorithm for transit buses tested in microscopic simulation. The talk will conclude with a discussion of opportunities and challenges in traffic control research, development, and deployment in complex urban environments such as NYC and Washington, D.C.
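For intuition about the headway control task, a non-learning baseline is the classical bus-holding rule: hold an early bus at a stop for a time proportional to its headway deviation. The sketch below is only such a hand-tuned baseline, hypothetical gains included; the algorithm in the talk learns this policy instead.

```python
def holding_time(headway_ahead, target_headway, gain=0.5, max_hold=120.0):
    """Proportional bus-holding rule for headway regulation.

    headway_ahead: measured time gap (s) to the bus ahead.
    target_headway: scheduled headway (s).
    Holds an early bus (short headway) proportionally to the deviation,
    capped at max_hold; a late bus is never held.
    """
    deviation = target_headway - headway_ahead   # positive when the bus is early
    return min(max(gain * deviation, 0.0), max_hold)
```

A learning-based controller would, in effect, replace the fixed gain and cap with a policy trained in the microscopic simulation mentioned above.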


Multimodal Emotion Recognition: Needs, Challenges, and Opportunities

Emotion plays an important role in diverse real-life applications, including video gaming, medical diagnosis, education, employee safety, patient care, and autonomous driving, among others. A person’s emotion can be identified from various sources of information, such as speech, transcripts, facial expressions, brain signals (EEG), or a combination of two or more of these signals. Among these sources, speech is seen as the most common attribute and the easiest to acquire and use. Speech attributes are not substantially affected by side information such as physical movement, visual occlusion, beards, etc. Moreover, speech features for emotion recognition are fairly invariant to language. However, recent research efforts have shown that the accuracy of speech-based emotion recognition systems can still be enhanced using visual cues such as facial expressions. With advances in computing power and the availability of large amounts of data, it is now becoming possible to combine and analyze huge amounts of data using advanced neural networks such as deep networks. In this presentation, we will discuss the fundamental concepts of emotion recognition. We will then review current research using speech and facial expressions separately, before moving to more recent multimodal emotion recognition systems. Finally, we will provide a perspective on future research directions in this area, with some major challenges and potential applications in this era of multimedia and smart living.
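A common way to combine the speech and facial modalities is decision-level (late) fusion of the unimodal classifiers' per-class probabilities. The sketch below is a generic illustration with an assumed fixed modality weight (a learned or attention-based weighting would be used in practice), not a system from the talk.

```python
import numpy as np

def late_fusion(p_speech, p_face, w_speech=0.6):
    """Decision-level (late) fusion of two unimodal emotion classifiers:
    weighted average of per-class probabilities, then argmax.

    p_speech, p_face: per-class probability vectors from each model.
    w_speech: assumed fixed weight given to the speech modality.
    Returns (predicted class index, fused probability vector).
    """
    p_speech = np.asarray(p_speech, dtype=float)
    p_face = np.asarray(p_face, dtype=float)
    fused = w_speech * p_speech + (1.0 - w_speech) * p_face
    return int(np.argmax(fused)), fused
```

Early (feature-level) fusion instead concatenates speech and facial features before a single classifier; late fusion keeps the unimodal pipelines independent, which simplifies training on heterogeneous data.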


Semantic Segmentation using Deep Learning and its Applications

Semantic scene segmentation is a challenging problem of great importance in many applications, including assistive and autonomous navigation systems. Such vision systems must cope with image distortions, changing surfaces, and varying illumination conditions. In this talk, we will present deep learning-based vision systems for fast and accurate object segmentation and scene parsing. Furthermore, the talk will present a hybrid deep learning approach for semantic segmentation. The new architecture combines Bayesian learning with deep Gabor convolutional neural networks (GCNNs) to perform semantic segmentation of unstructured scenes. In this approach, the Gabor filter parameters are modeled as normal distributions whose mean and variance are learned using variational Bayesian inference. The resulting network has a compact architecture with a smaller number of trainable parameters, which helps mitigate the overfitting problem.
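To illustrate the idea, a 2-D Gabor kernel and a reparameterized draw of its parameters from learned normal distributions can be sketched as follows. The three-parameter choice (orientation, scale, wavelength) and all names are our simplification of the general approach, not the talk's exact formulation.

```python
import numpy as np

def gabor_kernel(size, theta, sigma, lam, psi=0.0, gamma=0.5):
    """2-D Gabor filter: a Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / lam + psi)

def sample_gabor(size, mu, log_var, rng):
    """Reparameterized draw of one Gabor kernel: each parameter
    (theta, sigma, lambda) is modelled as Normal(mu, exp(log_var)) and
    sampled as mu + std * eps, keeping the sample differentiable w.r.t.
    mu and log_var during variational training.
    """
    eps = rng.standard_normal(3)
    theta, sigma, lam = mu + np.exp(0.5 * log_var) * eps
    return gabor_kernel(size, theta, abs(sigma) + 1e-3, abs(lam) + 1e-3)
```

Because each filter is generated from a handful of distribution parameters rather than size×size free weights, such a layer is far more compact than a standard convolution, consistent with the overfitting argument above.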


Wearable Brain-Computer Interfaces for Measuring Mental States: After Data, Are We Also Losing the Privacy of Our Thoughts?

In the last two decades, the Brain-Computer Interface (BCI) has gained great interest in the technical-scientific community, and more and more effort has been made to overcome its limitations in daily use. In the Industry 4.0 framework, the human becomes part of a highly composite automated system, and new-generation user interfaces integrating cognitive, sensorial, and motor skills are designed. Humans can send messages or decisions to the automation system through a BCI by intentional modulation of brain waves. However, through the same signal, the system (and, hence, also the humans who are part of it) acquires information on the user's status.

In this talk, the most interesting results of this technological research effort, as well as its most recent developments, are reviewed. In particular, after a short survey of the research carried out at the University of Naples Federico II, also in cooperation with CERN, the presentation focuses mainly on state-of-the-art research on wearable measurement systems for actuating robots and monitoring mental states (emotions, engagement, distraction, stress, and so on). Tens of disparate case studies carried out by Federico II researchers, spanning from rehabilitation of children with autism to robotic inspection in hazardous sites, are reported. Special attention is also given to the ethical and legal issues arising from daily use, leaving puzzling questions to the attendees.