My PhD in Deep Learning for Communications: 5 Years in Review
A look back at my PhD journey, from Udine to Klagenfurt to Princeton and NeurIPS, and the moments that shaped it.

In December 2018, I embarked on an exciting new chapter of my academic career: my PhD in Information and Communications Engineering at the University of Klagenfurt.
My journey actually began in Udine, where I completed my master's in Telecommunications Engineering at the University of Udine, on a scholarship from the Scuola Superiore. During my master's, I had the opportunity to conduct research at Emory University in Atlanta, focusing on compressive sensing for biomedical signals.
As I transitioned into the PhD phase, I considered various paths, including opportunities at ETH Zurich. Ultimately, I chose Klagenfurt for its unique blend of deep learning and telecommunications, allowing me the freedom to explore innovative intersections between these fields.
This blog post will briefly take you through my PhD journey, the pivotal moments, and the personal experiences that shaped my path. Funnily enough, the content you will read below is a summary of a voice conversation between ChatGPT and me while driving. Unfortunately, as of late 2025, no LLM can reliably turn such a conversation into faithful text that feels human.
From Foundations to Innovations: The Early Years of My PhD
As I began my journey at the University of Klagenfurt in December 2018, I stepped into a unique role as a university assistant. The position combined rigorous research, the standard PhD path, with teaching responsibilities, offering a holistic academic experience.
The early phase of my PhD was dedicated to immersing myself in the intersection of machine learning and communication engineering. During this period, we focused on leveraging deep learning techniques, particularly Generative Adversarial Networks (GANs), to model and generate power line communication channels and noise. We also explored innovative methods to detect anomalies in smart grids, using frequency analysis and machine learning classifiers.
One of the pivotal early papers was a survey on integrating machine learning with power line communications, setting the stage for subsequent research. This foundational work not only highlighted the potential of deep learning in this domain but also paved the way for more advanced tools, such as statistical dependence analysis and mutual information, for understanding complex data dependencies.
As we concluded 2019 and moved into 2020, the focus on applying these techniques to real-world power line scenarios continued to grow. By utilizing spectrograms and advanced generative models, we could create more accurate simulations of communication channels and noise patterns.
Unlocking the Fundamentals: Information Theory and Channel Capacity
As the world navigated the challenges of 2020, my research took a deeper dive into the core principles of information theory. Professor Tonello posed the challenge: how can we accurately measure the capacity of an arbitrary communication channel, a fundamental goal in ensuring reliable and efficient data transmission?
At the heart of this quest was the concept of mutual information, which quantifies the amount of information shared between the input and the output of a channel. Estimating mutual information, however, is no trivial task, especially in complex, multidimensional communication systems.
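In symbols, the mutual information between the channel input X and output Y is the expected log ratio between the joint density and the product of the marginals:

```latex
I(X;Y) \;=\; \mathbb{E}_{p(x,y)}\!\left[\log \frac{p(x,y)}{p(x)\,p(y)}\right]
```

The difficulty is that in realistic channels none of these densities are available in closed form; they must be estimated from samples.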
In late 2020, inspired by the limitations of traditional estimators, I began exploring how GANs could be harnessed to estimate these complex relationships. At convergence, the optimal GAN discriminator outputs D*(x) = p_data(x) / (p_data(x) + p_gen(x)), from which the density ratio between the real and generated data distributions can be recovered. This insight paved the way for a novel approach: using GAN-style discriminators to estimate mutual information via these density ratios, thus bridging the gap between deep learning and information theory.
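To see why density ratios are enough, here is a minimal illustration (not the GAN-based estimator itself): for a correlated Gaussian pair the ratio p(x,y)/(p(x)p(y)) is known in closed form, so a Monte Carlo average of its logarithm recovers the analytic mutual information. A trained discriminator would learn this same ratio directly from samples, with no closed-form densities required.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8          # correlation of the Gaussian pair
n = 200_000

# Paired samples (x, y) drawn from the joint distribution
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def log_density_ratio(x, y, rho):
    """log p(x,y) / (p(x) p(y)) for a unit-variance bivariate Gaussian."""
    return (-0.5 * np.log(1 - rho**2)
            - (rho**2 * (x**2 + y**2) - 2 * rho * x * y) / (2 * (1 - rho**2)))

mi_mc = log_density_ratio(x, y, rho).mean()  # Monte Carlo estimate of E[log ratio]
mi_true = -0.5 * np.log(1 - rho**2)          # analytic mutual information
```

With enough samples the two numbers agree closely; the hard part in practice is learning the ratio when it is not available analytically.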
This journey led to the development of the CORTICAL framework, a pioneering method for cooperative channel capacity learning. It was a significant step forward in applying deep learning to fundamental information-theoretic problems, and it marked a key milestone in my PhD journey.
The Pinnacle Achievements at Princeton and NeurIPS
As my PhD journey progressed, a significant milestone came in late 2022 at Princeton. Collaborating with the renowned Professor H. Vincent Poor, we refined CORTICAL, our cooperative framework for channel capacity learning. The resulting paper, published in IEEE Communications Letters, was a testament to the power of deep learning in information theory.
Furthermore, in 2024, my colleague Nicola and I presented our novel mutual information estimator at NeurIPS in Vancouver. The paper, titled Mutual Information Estimation via f-Divergence and Derangements, introduced a state-of-the-art discriminative approach that remains a benchmark in the field today. This achievement, presented at one of the most prestigious conferences, marked the pinnacle of my PhD and underscored the impact of combining deep learning with information theory.
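A core ingredient here, as I understand it, is using derangements (permutations with no fixed points) to re-pair samples: shuffling y while forbidding self-pairings yields negative samples that approximate the product of marginals p(x)p(y) without ever re-creating a true joint pair. A minimal sketch of that sampling step (illustrative only, not the paper's code):

```python
import numpy as np

def derangement(n, rng):
    """Sample a random permutation with no fixed points (sigma[i] != i)."""
    while True:  # rejection sampling; acceptance probability tends to 1/e
        sigma = rng.permutation(n)
        if not np.any(sigma == np.arange(n)):
            return sigma

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
y = x + 0.1 * rng.standard_normal(8)  # paired samples from the joint p(x, y)

sigma = derangement(len(y), rng)
y_shuffled = y[sigma]  # re-paired samples approximating p(x) p(y)
```

A plain random permutation can leave some samples paired with themselves, which biases a discriminative estimator; forbidding fixed points removes that failure mode.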
The Defense
I defended my PhD on October 30, 2024, at the University of Klagenfurt.

The full dissertation, Deep Learning Models for Physical Layer Communications, is available on arXiv.

