Publications

Main Papers

MEA.Seddik, M.Guillaud, R.Couillet, “When Random Tensors meet Random Matrices”, Annals of Applied Probability. [arxiv] [poster] [slides]

2024

H.Lebeau, MEA.Seddik, JHM.Goulart, “Performance Gaps in Multi-view Clustering under the Nested Matrix-Tensor Model”, ICLR’2024. [paper]

2023

MEA.Seddik, M.Achab, JHM.Goulart, M.Debbah, “A Nested Matrix-Tensor Model for Noisy Multi-view Clustering”. [preprint]

MEA.Seddik, JHM.Goulart, M.Guillaud, A.Decurninge, “Hotelling Deflation on Large Symmetric Spiked Tensors”. [preprint]

MEA.Seddik, M.Tiomoko, A.Decurninge, M.Panov, M.Guillaud, “Learning from Low Rank Tensor Data: A Random Tensor Theory Perspective”, UAI’2023 (Best Paper Runner-up). [preprint] [poster] [slides]

MEA.Seddik, M.Mahfoud, M.Debbah, “Optimizing Orthogonalized Tensor Deflation via Random Tensor Theory”. [arxiv]

2022

MEA.Seddik, M.Guillaud, A.Decurninge, JHM.Goulart, “On the Accuracy of Hotelling-Type Tensor Deflation: A Random Tensor Analysis”. [preprint] [slides]

MEA.Seddik, M.Guillaud, R.Couillet, “Quand les tenseurs aléatoires rencontrent les matrices aléatoires” (“When Random Tensors Meet Random Matrices”), Gretsi’2022. [paper] [poster] [slides_fr] [slides_en]

M.Tiomoko, E.Schnoor, MEA.Seddik, I.Colin, A.Virmaux, “Deciphering Lasso-based Classification Through a Large Dimensional Analysis of the Iterative Soft-Thresholding Algorithm”, ICML’2022. [paper] [poster]

2021

MEA.Seddik, C.Wu, J.Lutzeyer, M.Vazirgiannis, “Node Feature Kernels Increase Graph Convolutional Network Robustness”, AISTATS’2022. [preprint] [poster] [slides]

A.Benzine, MEA.Seddik, J.Desmarais, “Deep Miner: A Deep and Multi-branch Network which Mines Rich and Diverse Features for Person Re-identification”. [preprint] [arxiv] [bibtex]

2020

MEA.Seddik, C.Louart, R.Couillet, M.Tamaazousti, “The Unexpected Deterministic and Universal Behavior of Large Softmax Classifiers”, AISTATS’2021. [paper] [poster] [bibtex]

MEA.Seddik, M.Tamaazousti, “Neural Networks Classify through the Class-wise Means of their Representations”, AAAI’2022. [paper] [poster]

MEA.Seddik, R.Couillet, M.Tamaazousti, “A Random Matrix Analysis of Learning with alpha-Dropout”, ICML’2020 Artemiss Workshop. [paper] [openreview] [slides] [bibtex]

MEA.Seddik, H.Essafi, A.Benzine, M.Tamaazousti, “Lightweight Neural Networks from PCA & LDA Based Distilled Dense Neural Networks”, ICIP’2020. [paper] [slides] [bibtex]

2019

MEA.Seddik, C.Louart, M.Tamaazousti, R.Couillet, “Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures”, ICML’2020. [paper] [arxiv] [slides] [bibtex]

MEA.Seddik, M.Tamaazousti, R.Couillet, “Why do Random Matrices Explain Learning? An Argument of Universality Offered by GANs”, Gretsi’2019. [paper] [slides] [poster] [bibtex]

MEA.Seddik, M.Tamaazousti, R.Couillet, “Kernel Random Matrices of Large Concentrated Data: The Example of GAN-generated Images”, ICASSP’2019. [paper] [conference] [poster] [bibtex]

2018

MEA.Seddik, M.Tamaazousti, R.Couillet, “A Kernel Random Matrix-Based Approach for Sparse PCA”, ICLR’2019. [paper] [poster] [bibtex]

MEA.Seddik, M.Tamaazousti, J.Lin, “Generative Collaborative Networks for Single Image Super-Resolution”, Neurocomputing’2019. [paper] [journal] [arxiv] [bibtex]

J.Lin, MEA.Seddik, M.Tamaazousti, Y.Tamaazousti, A.Bartoli, “Deep Multi-class Adversarial Specularity Removal”, SCIA’2019. [paper] [bibtex]

J.Lin, MEA.Seddik, M.Tamaazousti, Y.Tamaazousti, A.Bartoli, “Suppression de spécularités par réseau adverse multi-classes” (“Specularity Removal with a Multi-class Adversarial Network”), ORASIS’2019. [paper] [bibtex]

2017

Y.Tamaazousti, H.Le Borgne, C.Hudelot, MEA.Seddik, M.Tamaazousti, “Learning More Universal Representations for Transfer-Learning”, TPAMI’2019. [paper] [bibtex] [code]

MEA.Seddik, V.Toldov, L.Clavier, N.Mitton, “From Outage Probability to ALOHA MAC Layer Performance Analysis in Distributed WSNs”, WCNC’2018. [paper] [slides] [bibtex]

My Thesis

Title: Random Matrix Theory for AI: From Theory to Practice. [slides] [manuscript]

Abstract: The AI era has driven the development of new algorithms and methods, often built on elementary principles, for handling large amounts of high-dimensional data. These high dimensions, however, impair the behavior of traditional methods, which therefore deserve to be revisited with more elaborate tools. A better understanding of these methods in the big-data regime reveals opportunities for improvement and thereby leads to more efficient algorithms. The Random Matrix framework provides a powerful tool for understanding and analysing the behavior of a wide range of ML methods under simple data models (such as Gaussian mixtures) in the high-dimensional setting. My PhD thesis aims to go beyond this simple-model hypothesis by modelling data as Lipschitz transformations of Gaussian vectors, a model that is more appropriate for practical datasets (structured data, images, etc.).
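
As a rough illustration of this data model, the following minimal sketch (Python with numpy only; the tanh map, the spectral normalization, and the dimensions are illustrative assumptions, not taken from the thesis) draws Gaussian vectors, pushes them through a 1-Lipschitz map, and compares the spectra of the resulting Gram matrices:

```python
import numpy as np

# Illustrative dimensions: p features, n samples, of comparable size
# (the regime where random matrix effects appear).
p, n = 512, 1024
rng = np.random.default_rng(0)

# Plain Gaussian data: columns are i.i.d. N(0, I_p) vectors.
Z = rng.standard_normal((p, n))

# A 1-Lipschitz map of Gaussian vectors: x = tanh(W z) with ||W||_2 <= 1
# (tanh is 1-Lipschitz), loosely mimicking structured "GAN-like" data.
# The choice of map is hypothetical, purely for illustration.
W = rng.standard_normal((p, p))
W /= np.linalg.norm(W, 2)  # spectral normalization makes z -> W z 1-Lipschitz
X = np.tanh(W @ Z)

# Spectra of the sample Gram matrices (1/n) X^T X for both datasets.
eig_gauss = np.linalg.eigvalsh(Z.T @ Z / n)
eig_lip = np.linalg.eigvalsh(X.T @ X / n)
print("Gaussian data:  eigenvalues in [%.3f, %.3f]" % (eig_gauss.min(), eig_gauss.max()))
print("Lipschitz data: eigenvalues in [%.3f, %.3f]" % (eig_lip.min(), eig_lip.max()))
```

By concentration-of-measure arguments, both spectra admit deterministic limits as n and p grow at the same rate, which is what allows random-matrix predictions derived for Gaussian models to carry over to such concentrated (e.g., GAN-generated) data.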

Composition of the jury: