References

[1] V. Sitzmann, J. Martel, A. Bergman, D. Lindell and G. Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems 33, 7462–7473 (2020).
[2] S. Ramasinghe and S. Lucey. Beyond periodicity: Towards a unifying framework for activations in coordinate-MLPs, arXiv preprint arXiv:2111.15135 (2021).
[3] S. Ramasinghe, L. MacDonald and S. Lucey. On regularizing coordinate-MLPs, arXiv preprint arXiv:2202.00790 (2022).
[4] S. Wang, H. Wang and P. Perdikaris. On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering 384, 113938 (2021).
[5] L. Lu, P. Jin, G. Pang, Z. Zhang and G. E. Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence 3, 218–229 (2021).
[6] D. B. Lindell, D. Van Veen, J. J. Park and G. Wetzstein. BACON: Band-limited coordinate networks for multiscale scene representation, arXiv preprint arXiv:2112.04645 (2021).
[7] S. Wang, H. Wang, J. H. Seidman and P. Perdikaris. Random weight factorization improves the training of continuous neural representations, arXiv preprint arXiv:2210.01274 (2022).
[8] A. Rahimi and B. Recht. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems 20 (2007).
[9] M. Tancik, P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. Barron and R. Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems 33, 7537–7547 (2020).
[10] S. Wang, Y. Teng and P. Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing 43, A3055–A3081 (2021).
[11] R. Fathony, A. K. Sahu, D. Willmott and J. Z. Kolter. Multiplicative filter networks. In: International Conference on Learning Representations (2021).
[12] R. Gnanasambandam, B. Shen, J. Chung, X. Yue et al. Self-scalable tanh (Stan): Faster convergence and better generalization in physics-informed neural networks, arXiv preprint arXiv:2204.12589 (2022).