Deep operator networks (DeepONets) have shown significant potential for solving partial differential equations (PDEs) by using neural networks to learn mappings between function spaces. However, their performance deteriorates as system size and complexity increase. To address this, recent Latent DeepONet models have shown promise as accelerated surrogates for such complex systems: by learning operators in low-dimensional latent spaces, they capture the system dynamics efficiently and discard redundant features that can impede optimization. However, these Latent DeepONet architectures rely solely on data-driven training, require large amounts of data, and are not amenable to physics-informed training. To overcome these limitations, we propose a novel architecture for latent operator learning in a physics-informed manner, termed PI-Latent-NO. Our method employs a two-stacked DeepONet framework: the first DeepONet learns a latent representation from a small dataset, while the second DeepONet maps this latent representation back to the solution in the original space. This framework offers several advantages. First, it enables efficient computation of spatial and temporal derivatives via forward-mode automatic differentiation, making physics-informed training possible and thereby requiring only a small dataset to learn the latent representation rather than the large datasets needed for purely data-driven training. Second, the architecture's inherent separability in space and time yields substantial memory and runtime savings: unlike physics-informed Vanilla DeepONet models, which exhibit quadratic scaling, our framework scales linearly, making it efficient and well suited to large physical problems. In conclusion, the PI-Latent-NO framework advances physics-informed operator learning in latent spaces, enabling robust and accurate learning of parametric PDEs from smaller datasets.
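
To make the architecture concrete, the following is a minimal, illustrative sketch of the two-stacked DeepONet idea in JAX; it is not the authors' implementation, and the latent dimension, the MLP branch/trunk parameterizations, and all network sizes are assumptions chosen purely for illustration. The first DeepONet maps the sampled input function and time to a latent state, the second maps that state and a spatial coordinate to the solution, forward-mode automatic differentiation (jax.jvp) supplies the temporal and spatial derivatives needed for a physics-informed residual, and the separable grid evaluation illustrates why a full time-space grid costs a number of network passes that grows linearly rather than quadratically.

```python
import jax
import jax.numpy as jnp

d_latent = 16   # assumed latent dimension (illustrative)
d_embed  = 32   # assumed branch/trunk embedding width (illustrative)

def mlp(params, x):
    """Plain tanh MLP; params is a list of (W, b) pairs."""
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def init_mlp(key, sizes):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def latent_net(params, u_sensors, t):
    """DeepONet 1: (sampled input function u, time t) -> latent state z(t)."""
    b  = mlp(params["branch"], u_sensors)          # (d_latent * d_embed,)
    tr = mlp(params["trunk"], jnp.atleast_1d(t))   # (d_embed,)
    return b.reshape(d_latent, d_embed) @ tr       # (d_latent,)

def recon_net(params, z, x):
    """DeepONet 2: (latent state z, spatial point x) -> solution s(t, x)."""
    b  = mlp(params["branch"], z)                  # (d_embed,)
    tr = mlp(params["trunk"], jnp.atleast_1d(x))   # (d_embed,)
    return jnp.dot(b, tr)

def solution(params, u_sensors, t, x):
    z = latent_net(params["latent"], u_sensors, t)
    return recon_net(params["recon"], z, x)

# Forward-mode AD (jax.jvp) for the derivatives in a physics-informed residual.
def s_t(params, u_sensors, t, x):
    t = jnp.asarray(t, jnp.float32)
    _, dt = jax.jvp(lambda tau: solution(params, u_sensors, tau, x),
                    (t,), (jnp.ones_like(t),))
    return dt

def s_x(params, u_sensors, t, x):
    x = jnp.asarray(x, jnp.float32)
    _, dx = jax.jvp(lambda xi: solution(params, u_sensors, t, xi),
                    (x,), (jnp.ones_like(x),))
    return dx

def solution_grid(params, u_sensors, ts, xs):
    """Separability: a full Nt x Nx grid costs Nt + Nx network passes plus one matmul."""
    zs  = jax.vmap(lambda t: latent_net(params["latent"], u_sensors, t))(ts)        # (Nt, d_latent)
    bs  = jax.vmap(lambda z: mlp(params["recon"]["branch"], z))(zs)                  # (Nt, d_embed)
    trs = jax.vmap(lambda x: mlp(params["recon"]["trunk"], jnp.atleast_1d(x)))(xs)   # (Nx, d_embed)
    return bs @ trs.T                                                                # (Nt, Nx)

# Toy usage with random parameters and a placeholder input function.
key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
m_sensors = 50
params = {
    "latent": {"branch": init_mlp(k1, [m_sensors, 64, d_latent * d_embed]),
               "trunk":  init_mlp(k2, [1, 64, d_embed])},
    "recon":  {"branch": init_mlp(k3, [d_latent, 64, d_embed]),
               "trunk":  init_mlp(k4, [1, 64, d_embed])},
}
u = jnp.zeros(m_sensors)
print(solution(params, u, 0.5, 0.3), s_t(params, u, 0.5, 0.3), s_x(params, u, 0.5, 0.3))
```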