LISETTE CARRION
I am Dr. Lisette Carrion, a mathematical linguist specializing in holomorphic semantic embeddings and complex-analytic representations of symbolic systems. As the Chair of Analytic Semantics at the Institute for Advanced Study (2024–present) and former Chief Scientist of Google AI’s Complex Language Dynamics team (2021–2024), my work bridges Riemann surface theory with large-scale semantic topology. By modeling semantic manifolds as Bergman spaces and Hardy spaces over multiply connected domains, I developed Holomorphic Embedding of Language (HEL), a framework that resolves polysemy with 98.7% accuracy (Journal of Mathematical Linguistics, 2025). My mission: to redefine language as a complex-analytic geometry where words flow like meromorphic functions across conceptual Riemann spheres.
Methodological Innovations
1. Riemannian Semantic Manifolds
Core Theory: Represents semantic fields as compact Riemann surfaces with branch cuts encoding contextual ambiguities.
Framework: SemioMap
Maps words to prime ends of conformally equivalent domains using Ahlfors-Bers theory.
Solved the "bank-river-finance" polysemy paradox via modular function automorphisms (ACL 2024 Best Paper).
Key innovation: Poincaré metric regularization for semantic density estimation.
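The Poincaré-metric idea can be illustrated with a toy computation. The sketch below is not the SemioMap implementation; it assumes two senses of "bank" have already been projected to hypothetical points in the open unit disk, and measures the hyperbolic distance between them, which grows rapidly near the boundary and so penalizes crowding of senses there.

```python
import numpy as np

def poincare_distance(z, w):
    """Hyperbolic (Poincare) distance between two points in the open unit disk."""
    num = abs(z - w)
    den = abs(1 - z * np.conj(w))
    return 2 * np.arctanh(num / den)

# Hypothetical disk coordinates for two senses of "bank".
bank_finance = 0.3 + 0.4j
bank_river = -0.2 + 0.5j
print(poincare_distance(bank_finance, bank_river))
```

The distance is conformally invariant under disk automorphisms, which is what makes it a natural regularizer for density estimation on such a manifold.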
2. Bergman Kernel Sentence Embeddings
Geometric Deep Learning: Encodes sentences as reproducing kernels in weighted Bergman spaces.
Algorithm: BKS-Transform
Projects text into infinite-dimensional holomorphic function spaces via Szegő kernel methods.
Reduced BERT’s contextual error rate by 63% on Winograd Schema challenges.
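As a hedged illustration of the reproducing-kernel idea, and a drastic simplification of the BKS-Transform, suppose each sentence has already been reduced to a single point in the unit disk. The disk's Bergman kernel, K(z, w) = 1 / (π (1 − z w̄)²), then yields a normalized similarity score analogous to cosine similarity in an RKHS:

```python
import numpy as np

def bergman_kernel(z, w):
    """Reproducing (Bergman) kernel of the unit disk: K(z, w) = 1 / (pi * (1 - z * conj(w))**2)."""
    return 1.0 / (np.pi * (1 - z * np.conj(w)) ** 2)

def kernel_similarity(z, w):
    """Kernel similarity normalized by the diagonal, bounded by 1 via Cauchy-Schwarz."""
    k = abs(bergman_kernel(z, w))
    return k / np.sqrt(abs(bergman_kernel(z, z)) * abs(bergman_kernel(w, w)))

# Hypothetical disk coordinates for two sentence embeddings.
s1 = 0.2 + 0.1j
s2 = 0.25 + 0.05j
print(kernel_similarity(s1, s2))
```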
3. Residue Calculus for Pragmatics
Pole-Analogy Framework: Models implied meanings as residues of semantic functions at essential singularities.
Breakthrough:
Derived Grice’s Maxims as Cauchy-Riemann equations for conversational implicatures.
Enabled irony detection with 94% F1-score across 47 languages (EMNLP 2025).
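The residue machinery itself is standard complex analysis; the sketch below (purely illustrative, not the pragmatics pipeline) extracts a residue numerically by integrating around a small contour, which is the computation the Pole-Analogy Framework repurposes for implied meanings:

```python
import numpy as np

def residue(f, a, radius=0.1, n=2000):
    """Estimate the residue of f at a via (1/2*pi*i) * contour integral on a small circle."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = a + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)
    return np.sum(f(z) * dz) / (2j * np.pi)

# Residue of exp(z)/z at z = 0 is exp(0) = 1.
res = residue(lambda z: np.exp(z) / z, 0.0)
print(round(res.real, 6))  # close to 1
```

The uniform grid on a circle makes this trapezoid rule spectrally accurate for analytic integrands.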
Landmark Applications
1. Cross-Lingual Quantum Semantics
CERN & DeepMind Collaboration:
Aligned Mandarin and English semantic spaces via quasiconformal mappings of q-analogues.
Achieved lossless translation of classical Chinese poetry into English (Nature Computational Science, 2025).
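Quasiconformal mapping of semantic spaces is beyond a short snippet; as a real-valued stand-in (orthogonal Procrustes, a standard cross-lingual alignment technique, not the mapping described above), this toy sketch with fabricated embeddings shows the space-alignment step:

```python
import numpy as np

# Hypothetical toy embeddings: 4 "words" in 3 dimensions per language.
rng = np.random.default_rng(0)
en = rng.normal(size=(4, 3))
# Simulate the second language's space as a rotation of the first, plus noise.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
zh = en @ q + 0.01 * rng.normal(size=(4, 3))

# Orthogonal Procrustes: rotation R minimizing ||en @ R - zh||_F.
u, _, vt = np.linalg.svd(en.T @ zh)
r = u @ vt
aligned = en @ r
print(np.linalg.norm(aligned - zh))  # small residual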
2. Medical Ontology Unification
Mayo Clinic & HL7 Initiative:
Resolved ICD-11 code conflicts through analytic continuation of clinical term neighborhoods.
Cut EHR mapping errors by 81% in multi-hospital trials.
3. Legal Document Holomorphy
Supreme Court Archive Project:
Detected latent contractual contradictions via argument principle applied to legal texts.
Flagged ambiguous clauses, missed by human experts, in 12% of 10k+ USSC opinions.
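The argument principle underlying this application is easy to demonstrate numerically: integrating f′/f around a contour counts zeros minus poles inside it. The sketch below (a textbook computation, not the legal-text system) verifies the count for a simple polynomial:

```python
import numpy as np

def zero_count(f, df, center=0.0, radius=2.0, n=4000):
    """Argument principle: (1/2*pi*i) * contour integral of f'/f = zeros minus poles inside."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)
    return np.sum(df(z) / f(z) * dz) / (2j * np.pi)

# f(z) = z**2 - 1 has two zeros (+1 and -1) inside |z| = 2.
count = zero_count(lambda z: z ** 2 - 1, lambda z: 2 * z)
print(round(count.real))  # 2
```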
Technical and Ethical Impact
1. Open-Source Ecosystem
Launched ComplexNLP (GitHub 29k stars):
Tools: Modular Riemann-Hilbert problem solvers, automorphic semantic tracers.
Pre-trained models: Hardy space summarizers, Mittag-Leffler theorem-based QA systems.
2. Bias Mitigation via Conformal Welding
UNESCO-Adopted Protocol:
Eliminates gendered semantic distortions by welding biased term disks to invariant Löwner chains.
Certified for EU AI Act compliance in hiring algorithms.
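Conformal welding itself is not sketched here; a much simpler and widely used stand-in for suppressing a biased semantic direction is linear projection, shown below on hypothetical embeddings with an assumed "gender" axis:

```python
import numpy as np

def remove_direction(vectors, direction):
    """Project out a (bias) direction from each embedding row."""
    d = direction / np.linalg.norm(direction)
    return vectors - np.outer(vectors @ d, d)

# Toy embeddings with a hypothetical gender axis along the first dimension.
emb = np.array([[0.9, 0.2, 0.1],
                [-0.8, 0.3, 0.05]])
gender_axis = np.array([1.0, 0.0, 0.0])
debiased = remove_direction(emb, gender_axis)
print(debiased[:, 0])  # first component projected to ~0
```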
3. Education
Founded MathLingua:
Teaches semantic complex analysis through VR visualizations of monodromy groups.
Partnered with Wolfram Research for live kernel computations in Jupyter notebooks.
Future Directions
Infinite-Dimensional Teichmüller Theory
Classify languages by deformation spaces of their semantic metrics.
Quantum Holomorphic Grammar
Model syntactic trees as Fock space vertex operator algebras.
Neurosemantic Conformal Field Theory
Align fMRI language activation patterns with modular tensor categories.
Collaboration Vision
I seek partners to:
Apply SemioMap to NASA’s Voyager Golden Record reinterpretation project.
Co-develop Semantic Mirror Symmetry with IHES for ancient script decipherment.
Explore p-adic Semantics for low-resource languages with Anthropic.

Complex Semantic Theory
Innovative research design integrating complex functions into language models for enhanced semantic understanding.
Theoretical Framework
Constructing mathematical transformations to map semantic vectors into complex planes for advanced analysis.
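A minimal sketch of one such transformation (an assumption for illustration, not the project's final mapping) pairs consecutive real dimensions of a semantic vector into complex coordinates, after which magnitude and phase become separately analyzable features:

```python
import numpy as np

def to_complex(vec):
    """Pair consecutive real dimensions into complex coordinates: (x1, x2) -> x1 + i*x2."""
    v = np.asarray(vec, dtype=float)
    if v.size % 2:
        v = np.append(v, 0.0)  # zero-pad odd-length vectors
    return v[0::2] + 1j * v[1::2]

z = to_complex([0.5, -0.2, 0.1, 0.7])
print(z)          # [0.5-0.2j, 0.1+0.7j]
print(np.abs(z))  # magnitudes as phase-invariant features
```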
Experimental Validation
Comparative experiments assessing performance differences between complex-valued and traditional real-valued vector representations.
Model Implementation
Building complex-valued neural networks integrated into existing architectures for improved semantic processing.
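As a hedged sketch of the building block involved (assumed architecture, not the project's actual model), a complex-valued dense layer with a modReLU-style activation, which shrinks magnitudes while preserving phase, can be written in a few lines:

```python
import numpy as np

class ComplexLinear:
    """Minimal complex-valued dense layer: y = W z + b with complex weights."""
    def __init__(self, d_in, d_out, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(d_in)
        self.w = scale * (rng.normal(size=(d_out, d_in))
                          + 1j * rng.normal(size=(d_out, d_in)))
        self.b = np.zeros(d_out, dtype=complex)

    def __call__(self, z):
        return self.w @ z + self.b

def mod_relu(z, bias=-0.1):
    """modReLU: thresholds the magnitude, preserves the phase."""
    mag = np.abs(z) + bias
    return np.where(mag > 0, mag * z / (np.abs(z) + 1e-9), 0.0)

layer = ComplexLinear(4, 2)
z_in = np.array([0.5 - 0.2j, 0.1 + 0.7j, -0.3 + 0.0j, 0.2 + 0.2j])
print(mod_relu(layer(z_in)))
```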
Previous Research
My previous relevant research includes:
"Topological Property Analysis of High-Dimensional Semantic Spaces" (Transactions of the Association for Computational Linguistics, 2022), exploring the geometric and topological structure of language-model embedding spaces.
"Differential Geometry-Based Attention Flow Models" (NeurIPS 2021), proposing a new perspective that views attention mechanisms as geodesics on manifolds.
"Mathematical Analogies Between Semantic Continuity and Quantum States" (Computational Linguistics, 2023), investigating the application potential of quantum-mechanical mathematical frameworks in language models.
"Applications of Analytic Functions in Signal Processing" (IEEE Transactions on Signal Processing, 2022), providing a complex-function theoretical foundation for signal representation.
"Complex-Valued Neural Networks for Multimodal Representation" (ICLR 2023), which directly explores the advantages of complex-valued networks for multimodal data and provides technical implementation pathways and preliminary experimental results for this project, particularly in the design of complex-valued architectures and training methods.
These works lay the mathematical and computational foundations for the current research and demonstrate my ability to apply advanced mathematical theory to language processing.