
Research projects 2019-20

The list of projects offered for the next academic year is shown below. For further information, contact the director of the PhD program.

Research projects for academic year 2019-20 (cycle 35 - second call - Regional Apprenticeship Program)

TITLE TUTOR ABSTRACT
Automatic retail shelf management using visual information and AI. Apprenticeship at Synesthesia S.r.l.
Contact: Prof. Marco Grangetto
In recent years AI, and in particular machine learning, has boosted the capability of automatically recognizing objects from visual data such as still images and video content. The aim of the project is to research and develop a novel image analytics platform designed to provide insights into the physical shelves of a retail environment. The project has two key challenges: the first is to develop algorithms for reliable detection, recognition and classification of objects, including small ones, since products can be of relatively small size; the second is the development of fast training and retraining algorithms, as the set of retail items needs to be updated relatively frequently over time. The system will be composed of an analytics platform, the image capturing devices and a reporting dashboard. Data acquisition is performed either by a camera that can be mounted on top of any standard retail shelf or by the camera of a smartphone or tablet used by store personnel. A cloud-based machine learning and image processing algorithm will be developed to analyze the shelf images. The objective is to extract basic data, such as the identification of products and labels, and to enable the computation of a variety of parameters, such as misplaced items, missing items from a specific shelf, sales statistics of specific items on a specific shelf and so on, so that the store personnel can take the necessary inventory control actions.
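As a purely illustrative sketch of the reporting side (not the project's actual pipeline), the snippet below assumes a hypothetical detector output format and compares it against an expected shelf planogram to flag missing and unexpected items; all product names, counts and the 0.5 confidence threshold are invented.

# Minimal sketch: comparing detected shelf items against an expected planogram.
# `detections` stands in for the output of an object detector (hypothetical format);
# the product names and shelf layout below are illustrative only.

from collections import Counter

def shelf_report(expected_planogram, detections):
    """Flag missing and unexpected products on a single shelf.

    expected_planogram: dict product_name -> expected count
    detections: list of (product_name, confidence) pairs from a detector
    """
    observed = Counter(name for name, conf in detections if conf >= 0.5)
    missing = {p: n - observed[p] for p, n in expected_planogram.items()
               if observed[p] < n}
    unexpected = {p: c for p, c in observed.items() if p not in expected_planogram}
    return {"missing": missing, "unexpected": unexpected}

if __name__ == "__main__":
    planogram = {"cereal_box": 4, "coffee_jar": 2}          # expected layout
    dets = [("cereal_box", 0.91), ("cereal_box", 0.88),
            ("coffee_jar", 0.77), ("soda_can", 0.95)]        # toy detector output
    print(shelf_report(planogram, dets))
    # -> {'missing': {'cereal_box': 2, 'coffee_jar': 1}, 'unexpected': {'soda_can': 1}}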
Machine learning techniques and signal processing for predictive maintenance. Apprenticeship at SPEA S.p.A.
Contact: Prof. Ruggero Pensa
Predictive maintenance reduces the operating costs of companies that use and produce expensive equipment by predicting faults from data collected by sensors. The identification and extraction of useful information from data is a process that requires several iterations and a deep understanding of the machine and its operating conditions. The project aims to discover how to combine machine learning and signal processing techniques, creating approaches to predict and isolate failures. The extracted features can be used as health indicators to implement fault classification algorithms and to estimate the residual useful life.
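As a minimal sketch of the kind of health indicators mentioned above (assuming numpy and a synthetic vibration-like signal, not SPEA data), the snippet below computes per-window RMS and kurtosis, two classic features that could feed a fault classifier or a residual-useful-life estimator.

# Minimal sketch: simple per-window health indicators from a 1-D sensor signal.
import numpy as np

def health_indicators(signal, window=1024):
    """Return (rms, kurtosis) for each non-overlapping window of a 1-D signal."""
    n = len(signal) // window
    frames = signal[:n * window].reshape(n, window)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    centered = frames - frames.mean(axis=1, keepdims=True)
    kurt = (centered ** 4).mean(axis=1) / (centered.var(axis=1) ** 2)
    return rms, kurt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    healthy = rng.normal(0, 1.0, 8192)
    faulty = healthy.copy()
    faulty[::500] += 8.0                       # simulate periodic impact spikes
    for name, sig in [("healthy", healthy), ("faulty", faulty)]:
        rms, kurt = health_indicators(sig)
        print(name, "mean RMS %.2f, mean kurtosis %.2f" % (rms.mean(), kurt.mean()))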
Automated Optical Inspection for industrial applications. Apprenticeship at SPEA S.p.A.
Contact: Prof. Davide Cavagnino
Automated Optical Inspection (AOI) has recently become an essential step in the testing of printed circuit boards. SPEA systems are designed to detect any possible defect in electronic products, and the company wants to investigate new, efficient algorithms for the detection of area defects, component polarity, component presence or absence, excessive or insufficient solder joints, component height measurement and analysis, presence of foreign material on the board, and shape recognition and classification.
Aggregate Computing for IoT. Apprenticeship at Reply S.p.A.
Contact: Prof. Ferruccio Damiani
The increase in the number of devices in the Internet of Things (IoT) and in swarm robotics will soon make it infeasible to deploy global-level software functionality by individually programming every single device. Aggregate computing is an innovative approach for programming the IoT: the key idea is to simplify software development by programming a large system as a whole. The desired system-level behavior is expressed by a declarative, mathematically and formally tractable global specification; individual devices are then automatically bound to play the corresponding local behavior of that specification. In this project we will: design and implement APIs to support the aggregate computing programming model in the context of mainstream IoT platforms; identify paradigmatic case studies; delineate the requirements that a solution to them must satisfy; design aggregate computing algorithms which satisfy these requirements; and finally implement those algorithms in order to validate the approach experimentally on real systems.
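A minimal, purely illustrative sketch of the aggregate computing style (a simulated device graph in plain Python, not an actual IoT platform API): every device repeatedly combines its neighbours' values to build a hop-count gradient from a source, one of the basic building blocks used to express global specifications.

# Minimal sketch: the classic "gradient" field computed by repeated local rounds.
def gradient(neighbours, sources, rounds=10):
    """neighbours: dict device -> list of neighbouring devices."""
    INF = float("inf")
    dist = {d: (0 if d in sources else INF) for d in neighbours}
    for _ in range(rounds):                       # repeated synchronous local rounds
        dist = {d: 0 if d in sources
                else min((dist[n] + 1 for n in neighbours[d]), default=INF)
                for d in neighbours}
    return dist

if __name__ == "__main__":
    # a small line-plus-branch topology (hypothetical deployment)
    topo = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d", "e"],
            "d": ["c"], "e": ["c"]}
    print(gradient(topo, sources={"a"}))
    # -> {'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 3}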

 

Research projects (cycle 35 - first call - DEADLINE EXPIRED)

 

TITLE TUTOR ABSTRACT
Reti Complesse per le Scienze Sociali Computazionali/Complex Networks for Computational Social Science Giancarlo Ruffo Nowadays data scientists deal with the understanding and forecasting of real-world phenomena by adopting a data-driven approach to the analysis of complex systems. One of the most successful representations that has allowed researchers to uncover non-trivial properties from large data sets is the network (or graph), which has led to the breakdown of standard theoretical frameworks and models. In fact, a vast number of real-world systems, from the socio-economic domain to power grids and the Internet, can be represented as large-scale complex networks. Measuring the structural characteristics and the dynamics of a network may lead to a new level of understanding of complex processes such as virality of information, diffusion, synchronization, and resilience. New computational and algorithmic challenges arise in dealing with large datasets, and novel applications must be designed according to the natural self-organization of such complex systems.
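As a small illustration of the network representation discussed above (assuming the networkx library; the generated graph and its parameters are arbitrary), the snippet below builds a synthetic scale-free network and measures a few structural properties.

# Minimal sketch: structural measurements on a synthetic scale-free network.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)   # preferential-attachment model

degrees = [d for _, d in G.degree()]
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("max degree:", max(degrees), "average degree: %.2f" % (sum(degrees) / len(degrees)))
print("average clustering coefficient: %.3f" % nx.average_clustering(G))

# hubs according to degree centrality
centrality = nx.degree_centrality(G)
hubs = sorted(centrality, key=centrality.get, reverse=True)[:5]
print("top-5 hubs:", hubs)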
Algoritmi di apprendimento automatico con garanzie di privacy in domini complessi/Privacy-preserving machine learning in complex domains Ruggero G. Pensa Today's systems produce a rapidly exploding amount of data, and this data in turn generates more data, forming a complex data propagation network. In such a scenario, citizens' privacy is constantly put at risk, so that the need for data protection is at the very centre of the public debate. Hence, machine learning and data science algorithms cannot ignore privacy constraints. Privacy-preserving machine learning is the ability to learn from data without disclosing any private information of the users. Among all proposed approaches, differential privacy has gained popularity due to its effectiveness and robustness in many application domains. Many privacy-preserving algorithms have been proposed in structured domains (databases, tables, statistical data); however, preserving privacy in unstructured domains (e.g., free text, images, videos and so on) is a novel and challenging task. This project concerns the study, design, development and testing of new privacy-preserving machine learning algorithms, as well as the design of machine learning methodologies for privacy risk assessment in complex structured and unstructured domains.
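As a minimal sketch of one well-known building block mentioned in the abstract, differential privacy, the snippet below applies the Laplace mechanism to a count query (numpy assumed; the dataset, epsilon and predicate are illustrative only).

# Minimal sketch: the Laplace mechanism for an epsilon-differentially-private count.
import numpy as np

def private_count(values, predicate, epsilon=0.5, sensitivity=1.0):
    """Release a count query with epsilon-differential privacy.

    Adding or removing one individual changes the count by at most `sensitivity`,
    so Laplace noise with scale sensitivity/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

if __name__ == "__main__":
    ages = [23, 35, 41, 29, 67, 52, 31, 45]            # toy dataset
    print("noisy count of people over 40:",
          round(private_count(ages, lambda a: a > 40), 2))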
Modellazione, Verifica e Riuso di Sistemi / System Modelling, Verification and Reuse Ferruccio Damiani The main research goal of the MoVeRe (System Modelling, Verification and Reuse, http://di.unito.it/movere) group is to contribute to an effective seamless integration of Formal Methods into software and system development methodologies. The research interests of the group span from foundational aspects to tools for supporting rigorous engineering of industrial systems. PhD projects on the following topics are currently available:
* Computational models and languages, Domain specific languages;
* Concurrent, distributed, and mobile systems;
* Cyber-Physical Systems, Internet of Things, Smart Cities, Wireless Sensor Networks;
* Edge/Fog/Cloud Computing;
* Self-organization, Swarm intelligence;
* Static and dynamic analysis techniques;
* System evolution and dynamic software updates;
* Variability modeling and software product lines.
The research group has well-established collaborations with research institutions based in Europe, Japan and the USA.
Modelli di programmazione paralleli per applicazioni moderne: AI and BigData/Parallel programming models for modern applications: AI and BigData Marco Aldinucci The main challenge for parallel execution of algorithms is to expose sufficient concurrency in the first place. Interestingly, most, if not all, mathematical models can be solved using a number of different algorithms, some of which expose much more concurrency than others. Traditionally, computational scientists have chosen algorithms according to desired mathematical properties or total computational complexity, but not concurrency. PhD projects are available that will look at new algorithms and methods which are highly suited to parallel execution, also considering novel ground-breaking methods such as data-centric approaches (e.g. big data analytics and machine learning).
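As a toy illustration of the data-parallel style mentioned above (standard-library Python only; the kernel and partitioning scheme are illustrative), the snippet below splits a data set into chunks, maps a CPU-bound function over them in separate processes, and reduces the partial results.

# Minimal sketch: a map-reduce style data-parallel computation with process workers.
from concurrent.futures import ProcessPoolExecutor

def partial_sum_of_squares(chunk):
    """CPU-bound kernel applied independently to each data partition."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]   # round-robin partitioning
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(parallel_sum_of_squares(data))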
Open Five: un progetto di ricerca di open science, con software open source, su robot open hardware, con open data, open culture/A research project conjugating open science with open software, on open hardware robots, with open data, for an open culture Rosa Meo Open Five: a research project combining data science with open science (reproducible and verifiable science), using open source software tools, on open hardware (Arduino), using and releasing open data in a cycle of public benefit, for an open culture (with no prejudices and discriminations).
Cyber-Physical Systems Enrico Bini Cyber-Physical Systems (CPS) are physical processes controlled by a pervasive network of embedded computers. In CPS, computation, communication, and control of the physical process are tightly coupled and constrained by the environment. Hence, the design of each dimension in isolation introduces cost-inefficient solutions. Examples of CPS are: smart grids for energy supply and demands, smart transportation in smart cities, autonomous vehicles. In this research proposal, the theoretical foundations of CPS will be investigated, in accordance with the background of the candidate.
Fondazioni Logiche dei modelli di computazione/Logical Foundations of Computational Models Luca Paolini Digital models of computation have solid theoretical foundations that have determined their undisputed success at the expense of other models, such as analog computation. However, alternative models of computation are nowadays emerging ever more strongly: quantum, reversible and biological computation, among others. The study of the theoretical foundations of these models is a pioneering and fascinating line of research that requires a broad spectrum of mathematical notions: logical, categorical, combinatorial, probabilistic, algebraic and more.
Programming Languages Luca Padovani Modern software applications rely on concurrency, inter-process synchronization and communication. These constructs are challenging to reason about and lead to subtle programming mistakes that are difficult to track and to repair. As a result, the overall development process of these applications is more expensive and less productive.  PhD projects are available aimed at developing type-based theories, techniques and tools for programming concurrent applications with certified correctness properties such as communication safety, protocol conformance, data-race freedom and deadlock freedom. Particular emphasis will be given to the application of the developed techniques and tools to real-world functional and object-oriented languages.
Artificial Intelligence for Dependable and Critical Systems Luigi Portinale In order to model and analyse dependable and critical systems, knowledge about the system behavior, as well as the operating environment, should be suitably exploited. AI techniques, especially those based on Probabilistic Graphical Models for reasoning under uncertainty, can be really useful in such a context. PhD projects are available aimed at studying new formalisms based on the PGM paradigm and on Machine Learning for applications in FDIR (Fault Detection, Identification and Recovery), cyber-security, predictive maintenance, and IoT.
Model checking quantitativo/Quantitative Model Checking Jeremy Sproston Model checking is a system verification technique for establishing whether a model of a system satisfies a number of formally-specified correctness properties. Model checking has been extended so that quantitative information, for example regarding continuous, timed or probabilistic behavior, can be represented both in system models and in the correctness properties. The project concerns the development of model-checking techniques for quantitative aspects of embedded software and cyber-physical systems in general, with particular focus on timed and probabilistic issues.
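As a small, illustrative example of a quantitative verification computation (numpy assumed; the Markov chain is a toy model), the snippet below computes the probability of eventually reaching a target state by value iteration, one of the basic numerical routines behind probabilistic model checking.

# Minimal sketch: probabilistic reachability in a discrete-time Markov chain,
# obtained by iterating x <- P x while keeping the target component clamped to 1.
import numpy as np

# toy 4-state chain (states 0..3); row i gives transition probabilities from state i
P = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.0, 0.4, 0.3, 0.3],
              [0.0, 0.0, 1.0, 0.0],   # state 2: absorbing "failure"
              [0.0, 0.0, 0.0, 1.0]])  # state 3: absorbing "success"
target = 3

x = np.zeros(len(P))
x[target] = 1.0
for _ in range(1000):                 # value iteration until (near) convergence
    x_new = P @ x
    x_new[target] = 1.0
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new
print("P(eventually reach state 3) from each state:", np.round(x, 4))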
Mining, retrieval e analisi di processi di business /Mining, retrieval and analysis of business process models Stefania Montani Process mining techniques to learn process models from business process traces; definition of proper similarity metrics for business process models; exploitation of these metrics within proper retrieval and ordering algorithms to support process analysis; comparison of process traces through deep learning techniques; testing in real world domains (e.g. stroke management; optimization of process task scheduling in a cloud computing environment).
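As a minimal sketch of one possible similarity metric between process traces (pure Python; the activity labels are illustrative and not taken from a real log), the snippet below normalizes the edit distance between two sequences of activities.

# Minimal sketch: trace similarity based on the Levenshtein edit distance.
def edit_distance(a, b):
    """Classic dynamic-programming edit distance between two traces."""
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,                          # deletion
                           dp[i][j - 1] + 1,                          # insertion
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return dp[-1][-1]

def trace_similarity(a, b):
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

t1 = ["triage", "CT scan", "thrombolysis", "monitoring", "discharge"]
t2 = ["triage", "CT scan", "monitoring", "surgery", "discharge"]
print("similarity: %.2f" % trace_similarity(t1, t2))   # -> 0.60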
Sistemi avanzati di Ontology Learning e Open Information Extraction basati su tecniche di Natural Language Processing, Machine Learning ed integrazione di risorse semantiche / Advanced Ontology Learning and Open Information Extraction systems based on Natural Language Processing, Machine Learning and integration of semantic resources Luigi Di Caro Ontology Learning and Open Information Extraction were born to solve the problems of extracting semantic knowledge from large corpora in automated ways. The project involves tasks such as 1) the advancement of semantic extraction methods, 2) the creation of multilingual datasets and resources, 3) the application of Machine (and Deep) Learning algorithms in this context, and 4) the development of end-user applications such as chatbots.
Reti neurali cognitivamente plausibili/Cognitively plausible neural networks Valentina Gliozzi After the recent explosion of research in neural networks (deep and non-deep), it has become clear that these models behave differently from humans in a variety of domains, including visual categorization, multimodal learning and language learning. Is it possible to make the existing models operating in these domains more cognitively plausible, and hence more understandable and reliable?
Tecniche Avanzate di Intelligenza Artificiale e di Basi di Dati Temporali in Medicina: Teoria ed Applicazioni/Advanced Artificial Intelligence and Temporal Database Techniques in Healthcare: Theory and Applications Paolo Terenziani The project is part of a long-term cooperation started in 1997 with ASU San Giovanni Battista in Turin (one of the major hospitals in Italy), and aims at developing intelligent computer-based techniques to support physicians in decision making and in the treatment of patients through clinical guidelines. The overall project, known as “GLARE” (Guideline Acquisition, Representation and Execution), is a collector of many different sub-projects, each one typically involving the development and/or application of advanced Artificial Intelligence techniques and methodologies. Typical areas of interest include knowledge acquisition, knowledge representation, knowledge-based verification, conformance analysis, and temporal reasoning. Other sub-projects focus on the development and application of advanced techniques to cope with time in relational DBs, in order to manage the temporal dimension of patients’ data. Applications include (support to) medical decision making, medical education, and the treatment of patients with multiple diseases.
Accountability computazionale/Computational accountability Matteo Baldoni, Cristina Baroglio, Roberto Micalizio Business processes represent an important tool for rationalizing business resources and cross-business relationships. Business processes are mainly based on a procedural and prescriptive representation of activities. Their main disadvantages are: the concerned parties cannot take advantage of opportunities nor adapt to adverse situations; there is no representation of accountability; they are not suitable for realizing socio-technical systems. The proposed research aims at studying social relationships as an alternative means of realizing business processes. Social relationships capture dependencies among the actors, capture in a natural way a notion of accountability, and evolve exclusively after the observation of social behaviour. Due to their declarative nature, they are suitable for describing business processes in a minimally descriptive way.
Interazione e coordinazione di sistemi multiagente basata su relazioni sociali /Interaction and coordination based on social relationships for Multiagent Systems Matteo Baldoni, Prof. Cristina Baroglio Commitments and, more generally, social relationships are the emerging paradigm for modeling interaction and coordination in multiagent systems. This research will investigate the use of social relationships both from a theoretical and a practical point of view. Special attention will be paid to data-awareness and norm-awareness.
Crowdmapping/Crowdmapping prof. Guido Boella Examining the impact of IT on the evolution of participatory cartography into crowdmapping. Possible focuses are: georeference by means of GPS (outdoor maps) or RFID (indoor mapping) in order to collect environment and user data, integration of map semiotics with user interaction principles, analysis of the social aspect by means of social networks and incentive mechanisms, cognitive (and possibly ontology-based) models for map-based knowledge representation, analysis of the temporal dimension for the map-based representation of fluents. Projects: FirstLife
Applicazioni di blockchain e smart contract/Blockchain applications and smart contracts prof. Claudio Schifanella The success of the Bitcoin cryptocurrency has shown the potential of distributed ledger technologies such as the blockchain. This project aims at studying these technologies, and in particular smart contracts. The project will benefit from the connection with the Co-city European project, where a blockchain for social purposes is being developed.
Linguaggio e mappe dell'odio in rete/Language and maps of hate speech online prof. Viviana Patti Data streaming from social media is a powerful source to detect trends and opinion shifts among citizens and also a testbed to validate social science theories. In particular, as the danger of social media as a breeding ground for online hate speech increases, the interest in developing artificial intelligence tools to detect and analyze hate speech grows hand in hand, with the main aim of monitoring and understanding the hate dynamics in a territory. Hate speech is today considered a phenomenon to be studied with particular attention, because of the availability of user generated content and its implications in the digital society, where the interplay between opinions formed and expressed online and actions that individuals may decide to take on and off line can quickly escalate. The project will focus not only on the study of automatic tools to analyze and detect hate speech in language, but also on the development of visualization methodologies for hate maps, where the use of different spatial and time aggregations can provide an effective tool to visually explore the complexity of the phenomenon at different scales.
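As a purely illustrative baseline (scikit-learn assumed; the four example sentences and their labels are invented, and real hate speech detection requires carefully annotated corpora and much richer models), the snippet below trains a TF-IDF plus logistic regression classifier.

# Minimal sketch: a baseline text classifier of the kind used as a starting point
# for hate speech detection experiments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I hate those people, they should leave",
         "What a wonderful day in Turin",
         "They are all criminals and should be banned",
         "Looking forward to the conference next week"]
labels = [1, 0, 1, 0]                       # 1 = hateful, 0 = not hateful (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["they should all be banned from here"]))   # likely -> [1]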
Explainable inferences in Artificial Intelligence Systems prof. Luca Console The adoption of artificial intelligence systems and smart objects in everyday life requires that these systems can interact with people and explain the solutions they propose and the inferences they made to compute those solutions. This has been a well-known problem in artificial intelligence since the 80's, but it is becoming critical nowadays. The project will consider various tasks carried out by AI systems, focusing on the reasoning patterns behind such tasks.
Computer vision and deep learning for  multi-dimensional imaging/ Visione artificiale e deep learning per immagini multi-dimensionali prof. Marco Grangetto The way we capture visual information from reality is changing significantly as witnessed by technological development of imaging sensors, ranging from stereoscopic/spherical to depth sensors, and light-field cameras. On the one hand, the imaging sensors can support novel display technologies, e.g. HDR, auto-stereoscopic, holographic and immersive head-mounted displays.  On the other hand, new multidimensional images (including RGB, infra-red, depth, 360-degrees), coupled with the advance of computer vision based on deep learning models pave the way to unparalleled opportunities for understanding and modeling of visual information. This project will be focused on advancing the state of the art in the area of computer vision with potential impact in many application areas ranging from biomedical image processing, to 3D rendering and augmented reality. http://di.unito.it/eidoslab
Conversational Interfaces and Natural Language Generation for Artificial Intelligence / Interfacce conversazionali e generazione automatica del linguaggio naturale per l'intelligenza artificiale Alessandro Mazzei Natural language is the most sophisticated means that machines can use in order to communicate with humans. Three fields of artificial intelligence that can be productively coupled with Conversational Interfaces (CI) and Natural Language Generation (NLG) are (1) explanation of Automatic Reasoning, (2) Social Robot Interaction, and (3) Assistive Technologies. Therefore, the project will consist of the study, design and application of CI and/or NLG to one of these fields.
Cognitive Architectures for Virtual Agents Antonio Lieto, Rossana Damiano Creating artificial agents able to exhibit human-like reasoning and behaviour in virtual environments is one of the main challenges in cognitive AI, with an impact on application areas such as video-games (including educational and serious games) and persuasive technologies.
The goal of this project is to study and develop a well-founded computational cognitive model of human behaviour that leverages reasoning heuristics and narrative knowledge to build human-like characters to be tested and deployed in a serious game setting.
Computational approaches and resources for stance and stereotype detection Cristina Bosco Recent advancements in sentiment analysis and related tasks, e.g. stance detection or hate speech detection, raise several new issues for researchers, as highlighted by the evaluation campaigns in this area. To address them, novel approaches must be developed by carefully taking into account different text genres (social media, newspapers), targets (immigrants, women or LGBT community members) and facets (stereotype, irony, metaphor), but also the influence of biases in the annotation, processing and analysis of data. The PhD project will be focused on the study and development of tools and resources that implement this kind of approach.
Knowledge representation and plausible reasoning in AI/Rappresentazione della conoscenza e plausible reasoning in AI Laura Giordano, Daniele Theseider Dupré Reasoning about a changing world with incomplete knowledge raises interesting challenges for knowledge representation and reasoning in AI. Examples span from the problem of inheritance with exceptions in web ontologies, to reasoning on business processes executing in a complex environment, or the execution of clinical guidelines.
PhD thesis projects are available on such topics, exploiting modal, temporal, conditional and description logics, and answer set programming, for defeasible and plausible reasoning, for modeling belief change and revision, for reasoning about actions, and for reasoning about ontologies. Another topic concerns the relationships between description logics and set theory.
Sistemi intelligenti basati su modellazione dell'utente per soggetti fragili / Intelligent systems for fragile people Cena Federica Intelligent systems based on user models are systems that create a digital representation of the user and use this model to adapt their content or interface. Thanks to the advent of pervasive computing, it is possible to create a holistic representation of the user that brings together: i) data about the user's behavior in real life, which can be collected from wearable devices and ambient intelligence; ii) data about the behavior of users on networks and social networks, in order to infer complex user characteristics (habits, cognitive abilities, emotional states). This model is particularly useful for providing adaptive support to vulnerable people (i.e. people with cognitive disabilities, such as people with autism, people with dementia, etc.): in fact, for these people it is necessary to provide the right advice and avoid giving wrong suggestions, which can cause stress and anxiety, with unpredictable consequences.
The project is about the design and development of an intelligent system able to proactively support cognitively-impaired people.
Bioinformatics work-flows for the reproducible analysis of massive sequencing data/Work-flow bioinformatici per analisi riproducibile di dati di sequenziamento massivi Marco Beccuti, Francesca Cordero The processing and analysis of genomic/transcriptomic data is an example of a big data application requiring complex, interdependent data analysis workflows. Such bioinformatics workflows typically require several computationally-intensive processing steps using different software packages, where some of the outputs form inputs for other steps. In this context, implementing scalable, reproducible, portable and easy-to-use work-flows is particularly challenging.
Therefore, this project concerns the study, design, development and testing of new bioinformatics methodologies, approaches and algorithms to deal with such a challenge.
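As a minimal sketch of the scheduling problem underlying such work-flows (standard-library Python 3.9+; the step names are illustrative and each step is a placeholder rather than a real tool invocation), the snippet below runs interdependent pipeline steps in topological order.

# Minimal sketch: executing interdependent workflow steps in dependency order.
from graphlib import TopologicalSorter

def run(step):
    print("running step:", step)            # placeholder for invoking a real tool

# step -> set of steps it depends on
pipeline = {
    "quality_control": set(),
    "trimming":        {"quality_control"},
    "alignment":       {"trimming"},
    "quantification":  {"alignment"},
    "report":          {"quality_control", "quantification"},
}

for step in TopologicalSorter(pipeline).static_order():
    run(step)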
Interfacce utente intelligenti / Intelligent User Interfaces Cristina Gena Intelligent User Interfaces (IUI) aim at improving the symbiosis between humans and computers by merging two research fields: Artificial Intelligence (AI) and Human-Computer Interaction (HCI). This may involve including intelligent capabilities in the interface in order to improve performance, usability and experience in critical ways. We are currently putting a lot of research effort into two fields belonging to IUI: i) Human Robot Interaction (HRI), which is a field of study dedicated to understanding, designing, and evaluating robotic systems for use by, or with, humans. Our research focus is on social and educational robots; ii) Brain Computer Interfaces (BCI), which are interfaces that put the user in communication with an electronic device through the brain activity produced by the user herself. Non-invasive BCIs are mainly based on electroencephalographic (EEG) signals. In these systems, users are able to manipulate their brain activity to produce signals that will then be used to control computers or communication devices without the aid of motor movement. Therefore, this project concerns the study, design, development and testing of a research line focusing either on HRI or BCI, or, even better, on HRI and BCI jointly.
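As a small illustration of the BCI side (numpy assumed; the EEG signal is synthetic), the snippet below estimates band power from a signal spectrum, one of the simplest features used to drive non-invasive EEG-based interfaces.

# Minimal sketch: alpha-band power estimation from a synthetic EEG-like signal.
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

if __name__ == "__main__":
    fs = 256                                           # sampling rate in Hz
    t = np.arange(0, 4, 1.0 / fs)
    eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.normal(0, 0.5, len(t))
    print("alpha (8-12 Hz) power: %.2f" % band_power(eeg, fs, 8, 12))
    print("beta (13-30 Hz) power: %.2f" % band_power(eeg, fs, 13, 30))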
Modeling, simulation and analysis of the propagation of ideas over social networks Rossano Gaeta, Michele Garetto The research project aims at the definition of simple mathematical models to describe and analyze the generation and diffusion of beliefs, opinions, and the like over social media (e.g., Facebook, Twitter, etc.), with particular emphasis on the proliferation of low-quality information (fake news). The activity will be carried out using available data sets, computer simulations, and mathematical analysis.
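As a toy example of the kind of diffusion model the project would refine and validate against real data (networkx assumed; graph, seeds and probabilities are arbitrary), the snippet below simulates an independent cascade over a synthetic small-world social graph.

# Minimal sketch: independent-cascade spread of a piece of content over a graph.
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.1, rng=random.Random(0)):
    """Each newly informed node informs each neighbour once with probability p."""
    informed, frontier = set(seeds), set(seeds)
    while frontier:
        next_frontier = set()
        for u in frontier:
            for v in G.neighbors(u):
                if v not in informed and rng.random() < p:
                    next_frontier.add(v)
        informed |= next_frontier
        frontier = next_frontier
    return informed

G = nx.watts_strogatz_graph(n=1000, k=10, p=0.05, seed=1)   # small-world social graph
reached = independent_cascade(G, seeds=[0, 1, 2], p=0.15)
print("fraction of users reached: %.2f" % (len(reached) / G.number_of_nodes()))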
Elaborazione di immagini biomediche/Biomedical image processing Davide Cavagnino, Maurizio Lucenteforte The availability of huge amounts of digital biomedical imaging data is significantly increasing the possibility of using processing techniques, pattern recognition and computer vision to support medical diagnosis in many fields. The project will develop fundamental research in the field of biomedical image processing in cooperation with medical units of our university and other partners, with particular interest in X-ray and PET images and histological microscopic images.
SafeML: Machine learning for critical systems we can rely upon / Intelligenza artificiale affidabile per la costruzione di sistemi critici sicuri Marco Botta and Susanna Donatelli This PhD project will address the new and important problem of assuring and assessing the safety of critical systems which incorporate machine learning components. In fact, machine learning (ML) components are increasingly adopted in many safety-critical domains (e.g., medical, aviation, automotive) due to their ability to learn and work with novel input/incomplete knowledge and their generalization capabilities.
Unfortunately, the existing standards or Verification and Validation (V&V) techniques used to ensure safety do not address the special characteristics of ML-based software, such as non-determinism, non-transparency and instability. To this end, the goal of the project is to pursue “safe ML”, meaning “safe use of ML in safety critical systems”. These objectives will be reached by i) investigating and establishing a sound conceptual model describing how safety concepts must be adapted/modified to cope with ML, ii) developing architectural principles, design guidelines and architectural solutions for assuring and assessing the safety of critical systems incorporating ML-based components, and iii) investigating recent proposals that advocate the inclusion of safety constraints in neural networks.
Ottimizzazione nei sistemi sanitari / Operational Research Applied to Health Services Roberto Aringhieri e Andrea Grosso Among the many fields where operational research and computer science meet, health care is surely one of the most vital nowadays. Health care is a very relevant research topic also for its impact on public opinion and for fuelling large discussions and debates. The most challenging aspect in health care stems from the high complexity of the system itself, its intrinsic uncertainty and its dynamic nature. This project will be focused on advancing the state of the art in the area of operational research methodologies applied to health services. Possible topics are the large and complex field of planning and scheduling health care activities, resources and personnel, and the management of the emergency medical service, the emergency department, and humanitarian logistics.
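As a tiny, illustrative instance of the planning problems mentioned above (scipy assumed; the cost matrix is random synthetic data), the snippet below assigns surgical cases to operating-room slots by solving a linear assignment problem.

# Minimal sketch: case-to-slot assignment via the linear assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(7)
# cost[i, j] = penalty (e.g. waiting time + overtime risk) of putting case i in slot j
cost = rng.integers(1, 20, size=(5, 5))

cases, slots = linear_sum_assignment(cost)
for i, j in zip(cases, slots):
    print(f"case {i} -> slot {j} (cost {cost[i, j]})")
print("total cost:", cost[cases, slots].sum())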
Apprendimento per ottimizzazione / Learning for optimization Roberto Aringhieri e Andrea Grosso The power of modern metaheuristic algorithms to generate better solutions to complex optimization problems is based on the link between algorithmic effectiveness and the combined effects of intensification, diversification and learning. This proposal aims at exploring the introduction of machine learning tools in the development of heuristic and metaheuristic algorithms for hard optimization problems arising in fields like scheduling, rostering and timetabling, and/or nonlinear optimization.
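As a minimal sketch of learning inside a metaheuristic (pure Python; the objective function and the two move operators are toy choices), the snippet below runs a local search whose neighbourhood operator is selected by a simple epsilon-greedy rule that rewards moves which improved the solution.

# Minimal sketch: epsilon-greedy adaptive operator selection inside a local search.
import random

def objective(x):                       # toy function to minimise
    return sum((v - 3) ** 2 for v in x)

def moves():
    return {"small_step": lambda x, i: x[i] + random.choice([-1, 1]),
            "big_step":   lambda x, i: x[i] + random.choice([-5, 5])}

def learned_local_search(n=10, iters=2000, eps=0.1):
    random.seed(0)
    x = [random.randint(-20, 20) for _ in range(n)]
    ops = moves()
    reward = {name: 0.0 for name in ops}
    for _ in range(iters):
        name = (random.choice(list(ops)) if random.random() < eps
                else max(reward, key=reward.get))
        i = random.randrange(n)
        candidate = x[:]
        candidate[i] = ops[name](x, i)
        delta = objective(x) - objective(candidate)
        if delta > 0:                   # accept improving moves, reward the operator
            x = candidate
            reward[name] += delta
    return x, objective(x), reward

x, best, reward = learned_local_search()
print("best objective:", best, "operator rewards:", reward)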
Lexical resources for the semantic analysis of natural language Daniele Radicioni This doctoral project is aimed at providing artificial systems with human-level competence in understanding text documents by leveraging the notion of common sense, which is typically missing from existing resources. This project requires and promotes a multidisciplinary perspective, including Computer Science, Cognitive Science, Cognitive Psychology and Neuroscience. Doctoral candidates will collaborate in carrying out research on lexical semantics, as described at http://ls.di.unito.it. Along with lexical resources and theoretical models of language semantics, applications will be drawn according to the interests of candidates, to tackle, e.g., figurative uses of language (such as metaphor and metonymy recognition), fake news detection in social media, open-domain question answering, knowledge graph induction, and event detection and extraction.
Leveraging Big Data Analysis to understand emerging phenomena in complex systems Maria Luisa Sapino Big data analysis is increasingly critical for understanding the spatio-temporal dynamics of emerging phenomena. The key characteristics of data sets and models relevant to big data analysis of complex events often include the following: (a) noisy, (b) multi-variate, (c) multi-resolution, (d) spatio-temporal, and (e) inter-dependent. Because of the volume and complexity of the data and the varying spatial and temporal scales at which relevant observations are made, domain experts today lack the means to adequately and systematically interpret these observations and understand the underlying events and processes. We will address the computational challenges that arise from the need to process, index, search, and analyze, in a scalable manner, large volumes of multivariate data.
Logics for Computational Creativity Viviana Bono, Antonio Lieto, Gian Luca Pozzato Inventing novel concepts by combining the typical knowledge of pre-existing ones is one of the most creative cognitive abilities exhibited by humans. Dealing with this problem requires, from an AI perspective, the harmonization of two conflicting requirements that are hardly accommodated in symbolic systems: the need for syntactic compositionality (typical of logical systems) and the need to exhibit typicality effects. The main aim of this project is to provide a logical framework able to account for this type of human-like and human-level concept combination, based on a nonmonotonic Description Logic with a probabilistic semantics.
The applications of such a logical framework may have an impact on a wide variety of fields in the creative industries, ranging from the creation of new characters in the movie industry (where, for example, the creation of a new protagonist for a cartoon can be obtained by combining some typical features of previous characters in a novel, surprising way) to the automatic generation of novel and cognitively-plausible landscape scenarios, narrative story-lines in videogames, or the design of innovative products in fast-moving domains, such as fashion. The proposed logical framework can be generally thought of as a problem-solving heuristic applied by artificial agents (including robots) in any situation where there is the need to invent a novel solution by recombining a repertoire of pre-existing knowledge. We would also like to start investigating possible relationships and mutual influences with other AI techniques exploited in similar settings, in particular Machine Learning.
Logiche descrittive preferenziali per la revisione di ontologie /Preferential Description Logics for Ontology Revision Roberto Micalizio, Gian Luca Pozzato The main goal of this project is to develop a methodology to revise a Description Logic knowledge base when exceptions are detected. The starting point of this approach relies on the methodology for debugging a Description Logic terminology, which addresses the problem of diagnosing incoherent ontologies by identifying a minimal subset of axioms responsible for an inconsistency. Once the source of the inconsistency has been localized, the identified axioms are revised in order to obtain a consistent knowledge base including the detected exception. To this aim, we intend to adopt a nonmonotonic extension of Description Logics based on the combination of a typicality operator and the well-established nonmonotonic mechanism of rational closure, which makes it possible to deal with prototypical properties and defeasible inheritance.
Data Science for Social Good Rossano Schifanella New technologies have made the collection and the analysis of data - by governments, private companies, or innovative researchers - possible, making available large-scale collections of digital traces of human behaviour at an unprecedented breadth, depth and scale. This project aims at blending machine learning, spatial analysis, network science, text+image analysis, and data visualization, to model human dynamics at scale and to tackle real-world societal challenges. In this context, three main sub-themes are suggested: (1) The New Science of Cities: social media feeds, transit cameras, mobile phones, mapping services and sensors provide a real-time picture of how cities work and enable the study of a wide range of issues affecting the everyday lives of citizens. This theme aims at modeling multi-modal mobility and human-centered approaches to explore the city, liveability and sustainability of urban areas, the relation between space, social life and well-being. (2) Computational Social Science: this theme aims at studying socio-cultural phenomena through large-scale digital datasets and social media platforms, and at proposing innovative applications to issues of societal interest, e.g., spreading of misinformation, bias towards minorities, migrations. It involves projects in the field of social innovation, philanthropy, international development and humanitarian action. Particular attention will be paid to important aspects like data ethics, privacy and algorithmic transparency. (3) Digital Health: digital technologies have huge potential to improve the health and wellbeing of citizens, enabling new approaches for patients, clinicians and researchers to manage healthcare more effectively. This theme focuses on the development of mathematical and computational tools to model social, cultural, and economic determinants that affect health. Particular focus will be placed on the relation between eating habits, drug consumption and daily habits.
Semantic Technologies and sentiment analysis to enhance the value of cultural heritage Anna Goy, Rossana Damiano, Viviana Patti The project can be seen as part of the inter-disciplinary research area of Digital Humanities, and aims at studying, designing, and developing ICT-based solutions for cultural resources management, with the objective of making them accessible and usable in innovative ways by a large audience with different skills and interests. In order to achieve this goal, the project will investigate the integration of two perspectives: (a) Using semantic technologies (ontologies, Semantic Web resources, Linked Open Data) to represent the content of heterogeneous cultural resources (textual documents, pictures, videos, paintings, sculptures, monuments, etc.). The resulting conceptual semantic model should enable the identification of people, places, events and relationships, thus allowing users to "discover" alternative and original access paths to cultural heritage. In particular, narrative paths play a major role in supporting a direct and friendly communication with users. Special attention will be paid to the possibilities offered by crowdsourcing models, in order to collaboratively build semantic representations of the domain and of its interpretation. (b) Using sentiment analysis tools. Such tools have proven useful to extract opinions and mood from a range of textual data in different domains, such as media marketing, political and social analysis, etc. Aspects related to sentiment are also linked to aesthetic experience, as acknowledged by philosophical and psychological theories throughout the centuries. Through social media, linguistic feedback about artworks or other types of cultural resources becomes available, thus the sentiment induced by such resources can be investigated. The goal of the PhD project is to investigate in a systematic way the relationship between sentiment, emotions, and content semantic description in the field of cultural heritage, be they artworks or archival resources. The research is expected to deliver a model of the relation between cultural resources and sentiment (or mood), by identifying domain-specific categories for describing sentiment in cultural heritage and its sub-domains. Coupling sentiment analysis and content semantic description in cultural heritage paves the way to innovative applications that range from sentiment- or mood-based content recommendation to personalized tools for the exploration of historical and art archives.
Last update: 23/09/2019 09:49