Domenico Tortorella received the BSc in computer engineering from the University of Salerno, Italy, in 2017, and the MSc in computer science from the University of Pisa, Italy, in 2020. Currently, he is a PhD student in computer science at the University of Pisa under the supervision of Prof. Alessio Micheli, working within the Computational Intelligence and Machine Learning (CIML) research group at the Department of Computer Science.
Research topics
- Machine learning for graphs
- Reservoir computing
- Constructive neural networks
Publications
- D. Tortorella, A. Micheli (2023). Minimum Spanning Set Selection in Graph Kernels. 13th IAPR-TC15 International Workshop on Graph-Based Representations in Pattern Recognition (GbR 2023), pp. 15-24. doi:10.1007/978-3-031-42795-4_2 ⭐ Best Paper Award
Kernel-based learning models such as support vector machines (SVMs) can seamlessly deal with graph structures thanks to suitable kernel functions that compute a particular similarity between pairs of data samples. In the last two decades, a plethora of graph kernels have been proposed, motivated by theoretical properties or designed specifically for an application domain. Computing graph kernels, however, presents a significant cost for both training and inference, since predictions on unseen graphs require evaluating the kernel, e.g. between the new sample and all support vectors, and solutions to an SVM optimization problem are not guaranteed to be sparse. In this paper, we present a method to select a minimum set of spanning vectors for the solutions of SVMs, based on the rank-revealing QR decomposition of the kernel Gram matrix. We apply it to 18 real-world classification tasks on chemical compounds, showing its effectiveness in reducing the computational burden of inference on unseen graphs by minimizing the number of support vectors without penalizing accuracy. This in turn gives us a tool to better analyze the trade-off between accuracy, expressiveness and inference cost among different graph kernels.
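To illustrate the core idea, here is a minimal sketch (with a synthetic low-rank Gram matrix, not the paper's implementation) of selecting a spanning subset of training samples via column-pivoted, rank-revealing QR:

```python
import numpy as np
from scipy.linalg import qr

# Hypothetical Gram matrix of a graph kernel on 6 training graphs:
# any symmetric positive semi-definite matrix works for illustration.
rng = np.random.default_rng(0)
F = rng.standard_normal((6, 3))   # latent features inducing a rank-3 kernel
K = F @ F.T                       # Gram matrix of rank 3

# Column-pivoted QR is rank-revealing: the first r pivot columns span
# the column space of K, so the corresponding r training samples can
# express any SVM solution that lives in that span.
Q, R, piv = qr(K, pivoting=True)
r = np.sum(np.abs(np.diag(R)) > 1e-10 * np.abs(R[0, 0]))  # numerical rank
spanning = piv[:r]                # indices of the spanning samples
print(r, sorted(spanning))
```

The pivot order ranks samples by how much new span each contributes; truncating at the numerical rank yields a minimal spanning set, against which all other kernel columns can be expressed.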
- A. Micheli, D. Tortorella (2023). Addressing Heterophily in Node Classification with Graph Echo State Networks. Neurocomputing, vol. 550, 126506. doi:10.1016/j.neucom.2023.126506
Node classification tasks on graphs are addressed via fully-trained deep message-passing models that learn a hierarchy of node representations via multiple aggregations of a node’s neighbourhood. While effective on graphs that exhibit a high ratio of intra-class edges, this approach poses challenges in the opposite case, i.e. heterophily, where nodes belonging to the same class are usually further apart. In graphs with a high degree of heterophily, the smoothed representations based on close neighbours computed by convolutional models are no longer effective. So far, architectural variations in message-passing models to reduce excessive smoothing, or rewiring of the input graph to improve longer-range message passing, have been proposed. In this paper, we address the challenges of heterophilic graphs with Graph Echo State Network (GESN) for node classification. GESN is a reservoir computing model for graphs, where node embeddings are recursively computed by an untrained message-passing function. Our experiments show that reservoir models are able to achieve accuracy better than or comparable to most fully trained deep models that implement ad hoc variations in the architectural bias or perform rewiring as a preprocessing step on the input graph, with an improvement in terms of efficiency/accuracy trade-off. Furthermore, our analysis shows that GESN is able to effectively encode the structural relationships of a graph node, as evidenced by a correlation between iterations of the recursive embedding function and the distribution of shortest paths in a graph.
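A minimal sketch of the kind of untrained message-passing recursion GESN is built on (the function name, weight scalings, and toy graph are illustrative, not the paper's exact formulation):

```python
import numpy as np

def gesn_embeddings(A, U, hidden=16, rho=0.9, iters=50, seed=0):
    """Sketch of GESN-style node embeddings: an untrained map
    x_v <- tanh(W_in u_v + W * sum of neighbour states),
    iterated towards a fixed point. All names are illustrative."""
    rng = np.random.default_rng(seed)
    n, d = U.shape
    W_in = rng.uniform(-1, 1, (hidden, d))
    W = rng.uniform(-1, 1, (hidden, hidden))
    # Rescale the (random, untrained) recurrent weights for stable
    # dynamics, accounting for the graph's spectral radius.
    W *= rho / (np.max(np.abs(np.linalg.eigvals(W)))
                * np.max(np.abs(np.linalg.eigvals(A))))
    X = np.zeros((n, hidden))
    for _ in range(iters):
        X = np.tanh(U @ W_in.T + A @ X @ W.T)
    return X

# Toy graph: a 4-node path with random node features.
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
U = np.random.default_rng(1).standard_normal((4, 3))
X = gesn_embeddings(A, U)
print(X.shape)  # (4, 16)
```

No gradient ever flows through `W` or `W_in`: only a readout (e.g. a linear classifier on `X`) would be trained, which is the source of the efficiency gains discussed above.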
- D. Tortorella, A. Micheli (2023). Is Rewiring Actually Helpful in Graph Neural Networks? arXiv:2305.19717
Graph neural networks compute node representations by performing multiple message-passing steps that consist of local aggregations of node features. Having deep models that can leverage longer-range interactions between nodes is hindered by the issues of over-smoothing and over-squashing. In particular, the latter is attributed to the graph topology which guides the message-passing, causing a node representation to become insensitive to information contained at distant nodes. Many graph rewiring methods have been proposed to remedy or mitigate this problem. However, properly evaluating the benefits of these methods is made difficult by the coupling of over-squashing with other issues strictly related to model training, such as vanishing gradients. Therefore, we propose an evaluation setting based on message-passing models that do not require training to compute node and graph representations. We perform a systematic experimental comparison on real-world node and graph classification tasks, showing that rewiring the underlying graph rarely confers a practical benefit for message-passing.
- D. Tortorella, A. Micheli (2023). Richness of Node Embeddings in Graph Echo State Networks. Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2023), pp. 11-16. doi:10.14428/esann/2023.ES2023-51
Graph Echo State Networks (GESN) have recently proved effective in node classification tasks, showing a particular ability to address the issue of heterophily. While previous literature has analyzed the design of reservoirs for sequence ESN and GESN for graph-level tasks, the factors that contribute to rich node embeddings are so far unexplored. In this paper, we analyze the impact of different reservoir designs on node classification accuracy and on the quality of node embeddings computed by GESN, using tools from the areas of information theory and numerical analysis. In particular, we propose an entropy measure for quantifying information in node embeddings.
- M. Pardi, D. Tortorella, A. Micheli (2023). Entropy Based Regularization Improves Performance in the Forward-Forward Algorithm. Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2023), pp. 393-398. doi:10.14428/esann/2023.ES2023-79
The forward-forward algorithm (FFA) is a recently proposed alternative to end-to-end backpropagation in deep neural networks. FFA builds networks greedily layer by layer, thus being of particular interest in applications where memory and computational constraints are important. In order to boost layers’ ability to transfer useful information to subsequent layers, in this paper we propose a novel regularization term for the layer-wise loss function that is based on Renyi’s quadratic entropy. Preliminary experiments show that accuracy generally improves significantly across all network architectures. In particular, smaller architectures become more effective in addressing our classification tasks compared to the original FFA.
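For reference, a generic Parzen-window estimator of Rényi's quadratic entropy, the kind of quantity such a regularizer can build on (the paper's exact normalization and its integration into the layer-wise loss may differ):

```python
import numpy as np

def renyi_quadratic_entropy(X, sigma=1.0):
    """Parzen-window estimate of Renyi's quadratic entropy,
    H2 = -log (1/N^2) sum_ij G(x_i - x_j; 2 sigma^2),
    where G is a Gaussian kernel. Generic estimator; names
    and normalization are illustrative."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    kernel = np.exp(-sq / (4.0 * sigma ** 2))
    return -np.log(kernel.mean())

X = np.random.default_rng(0).standard_normal((32, 8))
print(renyi_quadratic_entropy(X))
```

Collapsed representations (all points identical) yield zero entropy, while spread-out representations yield higher values, which matches the intuition of rewarding layers that preserve information for subsequent layers.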
- A. Ceni, P. F. Dominey, C. Gallicchio, A. Micheli, L. Pedrelli, D. Tortorella (2023). Continuously Deep Recurrent Neural Networks. ECML-PKDD 2023 Workshop “Deep Learning meets Neuromorphic Hardware” (non-archival contribution).
We introduce a Recurrent Neural Network model characterized by connectivity patterns inspired by local cortical connectivity in the brain. Leveraging local connections between neurons, we are able to model the effective depth of the proposed model in a continuous fashion via a single hyperparameter that controls the extent of local connectivity. We frame the proposed neural model in the context of Reservoir Computing, and show empirically that its local connectivity degree enables diverse dynamical behaviors in terms of memorization capabilities.
- D. Tortorella, A. Micheli (2022). Leave Graphs Alone: Addressing Over-Squashing without Rewiring (Extended Abstract). Presented at the First Learning on Graphs Conference (LoG 2022), Virtual Event, December 9–12, 2022. openreview.net/forum?id=vEbUaN9Z2V8
Recent works have investigated the role of graph bottlenecks in preventing long-range information propagation in message-passing graph neural networks, causing the so-called ‘over-squashing’ phenomenon. As a remedy, graph rewiring mechanisms have been proposed as preprocessing steps. Graph Echo State Networks (GESNs) are a reservoir computing model for graphs, where node embeddings are recursively computed by an untrained message-passing function. In this paper, we show that GESNs can achieve a significantly better accuracy on six heterophilic node classification tasks without altering the graph connectivity, thus suggesting a different route for addressing the over-squashing problem.
- D. Tortorella, A. Micheli (2022). Beyond Homophily with Graph Echo State Networks. Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), pp. 491-496. doi:10.14428/esann/2022.ES2022-58
Graph Echo State Networks (GESN) have already demonstrated their efficacy and efficiency in graph classification tasks. However, semi-supervised node classification has brought out the problem of over-smoothing in end-to-end trained deep models, which causes a bias towards high-homophily graphs. We evaluate for the first time GESN on node classification tasks with different degrees of homophily, analyzing also the impact of the reservoir radius. Our experiments show that reservoir models are able to achieve accuracy better than or comparable to fully trained deep models that implement ad hoc variations in the architectural bias, with a gain in terms of efficiency.
- D. Tortorella, C. Gallicchio, A. Micheli (2022). Hierarchical Dynamics in Deep Echo State Networks. Proceedings of the 31st International Conference on Artificial Neural Networks (ICANN 2022), pp. 668-679. doi:10.1007/978-3-031-15934-3_55
Reservoir computing (RC) is a popular approach to the efficient design of recurrent neural networks (RNNs), where the dynamical part of the model is initialized and left untrained. Deep echo state networks (ESNs) combine the deep learning approach with RC by structuring the reservoir in multiple layers, thus offering the striking advantage of encoding the input sequence on different time-scales. A key factor for the effectiveness of ESNs is the echo state property (ESP), which ensures the asymptotic stability of the reservoir dynamics. In this paper, we perform an in-depth theoretical analysis of asymptotic dynamics in Deep ESNs with different contractivity hierarchies, offering a more accurate sufficient condition for the ESP. We investigate how different hierarchies of contractivity affect memory capacity and predictive performance in regression tasks, concluding that structuring reservoir layers in decreasing contractivity is the best design choice. The results of this paper can potentially be applied also to the design of fully-trained RNNs.
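A minimal sketch of stacked untrained reservoirs with a decreasing-contractivity hierarchy, i.e. a smaller spectral radius at each deeper layer (all names, scalings, and the per-layer decay factor are illustrative):

```python
import numpy as np

def deep_esn_states(u, layers=3, hidden=8, rho_top=0.9, seed=0):
    """Sketch of a Deep ESN: stacked untrained reservoirs, each fed
    by the state sequence of the layer below, with decreasing
    spectral radius (contractivity) per layer. Illustrative only."""
    rng = np.random.default_rng(seed)
    T = len(u)
    states = []
    x_in = u[:, None]                   # 1-dimensional input sequence
    for l in range(layers):
        rho = rho_top * (0.5 ** l)      # decreasing contractivity
        W_in = rng.uniform(-1, 1, (hidden, x_in.shape[1]))
        W = rng.uniform(-1, 1, (hidden, hidden))
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
        X = np.zeros((T, hidden))
        x = np.zeros(hidden)
        for t in range(T):              # untrained recurrent update
            x = np.tanh(W_in @ x_in[t] + W @ x)
            X[t] = x
        states.append(X)
        x_in = X                        # feed the next layer
    return states

u = np.sin(np.linspace(0, 10, 200))
S = deep_esn_states(u)
print([s.shape for s in S])
```

Deeper, more contractive layers forget faster, so the stack naturally encodes the input on different time-scales, which is the design question the paper analyzes.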
- D. Tortorella, C. Gallicchio, A. Micheli (2022). Spectral Bounds for Graph Echo State Network Stability. Proceedings of the 2022 International Joint Conference on Neural Networks. doi:10.1109/IJCNN55064.2022.9892102
Graph echo state networks (GESN) are a class of reservoir computing models for the efficient and effective processing of graphs. They compute graph embeddings through the convergence to a fixed point of a dynamical system whose weights are randomly initialized according to a generalization of the echo state property, called the graph embedding stability (GES) property. In this paper, we prove new and more accurate bounds for necessary and sufficient GES conditions. Experiments demonstrate how these bounds allow easier parameter selection and better-quality reservoirs.
- A. Micheli, D. Tortorella (2022). Discrete-Time Dynamic Graph Echo State Networks. Neurocomputing, vol. 496, pp. 85-95. doi:10.1016/j.neucom.2022.05.001
Relations between entities evolving through discrete time-steps can be represented by discrete-time dynamic graphs. Examples include hourly interactions between social network users or daily infection spreading. In this paper, we present Dynamic Graph Echo State Network (DynGESN), a reservoir computing model for the efficient processing of discrete-time dynamic graphs. We prove a sufficient condition for the echo state property, which ensures that graph embeddings are independent of initial conditions, and we briefly analyze reservoir dynamics. DynGESN is compared against temporal graph kernels (TGKs) on twelve graph classification tasks, and against ten different end-to-end trained temporal graph convolutional networks (TGNs) on four vertex regression tasks, since TGKs are limited to graph-level tasks. Compared to TGKs, which need to hold the entire history of vertex interactions, our model provides a vector encoding for the dynamic graph that is updated at each time-step without requiring training. Experiments show that our model achieves accuracy in line with TGKs that have comparable computational complexity, while still offering space and time requirements better suited to scale to large-size data. Moreover, DynGESN overall achieves accuracy superior to or on par with TGNs, while improving efficiency by up to ten times on inference and training time.
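A minimal sketch of a DynGESN-style recurrence, where the node states are updated once per snapshot by an untrained message-passing map driven by that time-step's adjacency matrix (names, scalings, and the toy snapshots are illustrative, not the paper's exact formulation):

```python
import numpy as np

def dyngesn(snapshots, U, hidden=8, rho=0.9, seed=0):
    """Sketch of a DynGESN-style recurrence: node states evolve
    through the sequence of adjacency matrices, one untrained
    message-passing update per time-step. Illustrative only."""
    rng = np.random.default_rng(seed)
    n, d = U.shape
    W_in = rng.uniform(-1, 1, (hidden, d))
    W = rng.uniform(-1, 1, (hidden, hidden))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    X = np.zeros((n, hidden))
    for A in snapshots:                 # one update per snapshot
        X = np.tanh(U @ W_in.T + A @ X @ W.T)
    return X                            # final dynamic-graph encoding

# Two snapshots of a 3-node dynamic graph.
A1 = np.array([[0,1,0],[1,0,0],[0,0,0]], float)
A2 = np.array([[0,0,1],[0,0,1],[1,1,0]], float)
U = np.random.default_rng(1).standard_normal((3, 2))
X = dyngesn([A1, A2], U)
print(X.shape)  # (3, 8)
```

Unlike a temporal graph kernel, the encoding is a fixed-size state updated online at each time-step, so the full interaction history never needs to be stored.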
- D. Tortorella, A. Micheli (2021). Dynamic Graph Echo State Networks. Proceedings of the 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2021), pp. 99-104. doi:10.14428/esann/2021.ES2021-70
Dynamic temporal graphs represent evolving relations between entities, e.g. interactions between social network users or infection spreading. We propose an extension of graph echo state networks for the efficient processing of dynamic temporal graphs, with a sufficient condition for their echo state property, and an experimental analysis of reservoir layout impact. Compared to temporal graph kernels that need to hold the entire history of vertex interactions, our model provides a vector encoding for the dynamic graph that is updated at each time-step without requiring training. Experiments show accuracy comparable to approximate temporal graph kernels on twelve dissemination process classification tasks.
Teaching
- Teaching Assistant for the Machine Learning course, Master's Degree in Computer Science (fall 2022)
Membership
- Vice-Chair of the IEEE Student Branch of Pisa (2023-)
- Institute of Electrical and Electronics Engineers (IEEE), Student member since 2016
- Association for Computing Machinery (ACM), Student member since 2017
- European Neural Network Society (ENNS), Student member (2022)
- ICANN 2022, Program Committee member
- ICANN 2023, Program Committee member
- Temporal Graph Learning Workshop @ NeurIPS 2023, Program Committee member
Contact
Room 298A
Department of Computer Science, University of Pisa
Largo B. Pontecorvo, 3, 56127 Pisa, Italy
ORCID 0000-0003-3910-7713
Google Scholar
Scopus
Web of Science
LinkedIn
Twitter