1. A Unified Paths Perspective for Pruning at Initialization. Gebhart, Thomas, Saxena, Udit, and Schrater, Paul. arXiv preprint arXiv:2101.10552.

    We introduce the Path Kernel as the data-independent factor in a decomposition of the Neural Tangent Kernel and show that the global structure of the Path Kernel can be computed efficiently. This decomposition separates architectural effects from data-dependent effects within the Neural Tangent Kernel.

  2. Sheaf Neural Networks. Hansen, Jakob, and Gebhart, Thomas. NeurIPS 2020 Workshop on Topological Data Analysis and Beyond.

    We present a generalization of graph convolutional networks by generalizing the diffusion operation using the sheaf Laplacian.
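As an illustration of the diffusion operation this generalizes (a minimal sketch, not the paper's implementation): on a 3-node path graph with 2-dimensional stalks, the sheaf Laplacian is built from a coboundary map whose blocks are the restriction maps. Choosing identity restriction maps, as below, recovers the ordinary graph Laplacian tensored with the identity, and globally constant signals are sections annihilated by the Laplacian. All names here are illustrative.

```python
import numpy as np

# Illustrative sketch: sheaf Laplacian on a 3-node path graph 0-1-2,
# with 2-dimensional stalks on every node and edge. Restriction maps
# F[(u, e)] send node stalks into edge stalks; taking them all to be
# the identity recovers the graph Laplacian tensored with I_2.
d = 2                                              # stalk dimension
nodes, edges = [0, 1, 2], [(0, 1), (1, 2)]
F = {(u, e): np.eye(d) for e in edges for u in e}  # restriction maps

# Coboundary delta: (delta x)_e = F_{v->e} x_v - F_{u->e} x_u for e = (u, v)
delta = np.zeros((d * len(edges), d * len(nodes)))
for i, (u, v) in enumerate(edges):
    delta[d*i:d*(i+1), d*u:d*(u+1)] = -F[(u, (u, v))]
    delta[d*i:d*(i+1), d*v:d*(v+1)] =  F[(v, (u, v))]

L = delta.T @ delta      # sheaf Laplacian, symmetric PSD by construction

# One diffusion step, the operation a sheaf convolution layer generalizes:
x = np.random.default_rng(0).normal(size=d * len(nodes))
x_next = x - 0.1 * (L @ x)

# Globally constant signals are sections of this trivial sheaf: L x = 0.
constant = np.tile([1.0, -2.0], len(nodes))
print(np.allclose(L @ constant, 0))  # True
```

With non-identity restriction maps the kernel of L consists of genuinely sheaf-consistent signals rather than constants, which is what distinguishes sheaf diffusion from ordinary graph convolution.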

  3. The Emergence of Higher-Order Structure in Scientific and Technological Knowledge Networks. Gebhart, Thomas, and Funk, Russell J. arXiv preprint arXiv:2009.13620.

    We use tools from algebraic topology to characterize the higher-order structure of knowledge networks in science and technology across scale and time.

  4. Applying SVM algorithms toward predicting host–guest interactions with cucurbit[7]uril. Tabet, Anthony, Gebhart, Thomas, Wu, Guanglu, et al. Physical Chemistry Chemical Physics.

    DFT, NMR, ITC, and cell confluence data are used to build predictive models of supramolecular binding to cucurbit[7]uril and to experimentally validate these predictions.

  5. Path Homologies of Deep Feedforward Networks. Chowdhury, Samir, Gebhart, Thomas, Huntsman, Steve, and Yutin, Matvey. Proceedings of the 18th IEEE International Conference on Machine Learning and Applications (ICMLA).

    We characterize two types of directed homology, path homology and directed Rips homology, with respect to the parameter connectivity of feedforward neural networks.

  6. Characterizing the Shape of Activation Space in Deep Neural Networks. Gebhart, Thomas, Schrater, Paul, and Hylton, Alan. Proceedings of the 18th IEEE International Conference on Machine Learning and Applications (ICMLA).

    We present a method for computing the persistent homology of activation space within neural networks, and provide empirical results on how this topological perspective can inform us about how neural networks process inputs.
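To make the underlying computation concrete (a simplified sketch on mock activations, not the paper's pipeline): the zero-dimensional persistence diagram of a Vietoris-Rips filtration on a point cloud is determined by its minimum spanning tree, since each point is born at scale 0 and components die at the MST edge lengths. The array of random "activations" below is a stand-in for real network activations.

```python
import numpy as np

# Simplified sketch: H0 persistent homology of a point cloud of mock
# "activations". In a Rips filtration every point is born at scale 0,
# and connected components merge (die) exactly at the edge lengths of
# the minimum spanning tree; one essential component never dies.
rng = np.random.default_rng(0)
acts = rng.normal(size=(30, 8))        # 30 activation vectors, dim 8

# Pairwise Euclidean distance matrix.
dist = np.linalg.norm(acts[:, None] - acts[None, :], axis=-1)

# Prim's algorithm: MST edge weights are the H0 death times.
n = len(acts)
in_tree = np.zeros(n, dtype=bool)
in_tree[0] = True
best = dist[0].copy()                  # cheapest link from each node to tree
deaths = []
for _ in range(n - 1):
    j = int(np.argmin(np.where(in_tree, np.inf, best)))
    deaths.append(float(best[j]))
    in_tree[j] = True
    best = np.minimum(best, dist[j])

# H0 diagram: (birth=0, death) pairs, plus one essential class at infinity.
h0 = [(0.0, d) for d in sorted(deaths)]
print(len(h0))  # 29 finite bars for 30 points
```

Higher-dimensional persistence (H1 and above) requires a full boundary-matrix reduction and is usually delegated to a library such as Ripser; the H0 case above suffices to show how multi-scale connectivity of activations is summarized.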

  7. Adversary Detection in Neural Networks via Persistent Homology. Gebhart, Thomas, and Schrater, Paul. arXiv preprint arXiv:1711.10056.

    We show that a multi-scale analysis of neural network activations is able to capture the existence of adversarial inputs within neural networks.


  1. Sheaf Neural Networks

    Presented (virtually) at the NeurIPS 2020 Workshop on TDA and Beyond

  2. PCA, Dimensionality, and NSD

    Presented (virtually) at the NSD Mini Conference, August 2020.

  3. Path Homologies of Deep Networks

    Presented at ICMLA 2019.