
1 Introduction

Consider complex-time (kime), \(\kappa = t \cdot e^{i\theta}\), as a polar-coordinate parameterization of the 2D domain, \(\mathbb{R}^2\), of a 2D manifold. Both domain parameters, the time (\(t\)) and the phase (\(\theta\)), are random variables, not constants or controllable parameters. For instance, \(t \sim Uniform[0, \infty)\) is a time sampling distribution, whereas \(\theta \sim \Phi(t)\) is some circular (periodic) phase sampling distribution, such as Laplace, supported on the unit circle \(S^1\), i.e., \(\theta \in [-\pi, \pi)\). In this TCIU Appendix we explore the idea of kime-phase tomography (KPT) to recover the unobservable kime-phase and to represent longitudinal processes, measured as repeated-measurement time-series, as 2D manifold kime-surfaces parameterized over \(\kappa\in\mathbb{C}\).

As a motivation, consider the specific example of having repeated time-series measurements of a controlled longitudinal process, \(\{s_1(t), s_2(t), \cdots, s_N(t)\}\). For a fixed repeated-sample index, \(j\), the observed intensities \(s_j(t)\) reflect the kime-surface amplitudes at radial location \(t\). If \(t\) follows a \(Uniform\) distribution and \(\theta \sim \Phi (t)\) is a circular (periodic) phase distribution supported on the unit circle \(S^1\), kime-phase tomography maps the longitudinal process, tracked as the entire collection of repeated time-series, to a manifold, \(Map: Process \{s_j(t)\} \to \mathcal{M}\).

Classical longitudinal analysis assumes deterministic time evolution, where repeated measurements differ only due to noise. However, many complex systems exhibit intrinsic variability that cannot be captured by simple error models. Kime-phase tomography addresses this by introducing complex time \(\kappa = te^{i\theta}\), where the phase \(\theta\) represents hidden degrees of freedom that modulate the observed process.

The key insight is that while we cannot directly observe the phase, we can reconstruct its distribution through tomographic methods inspired by quantum mechanics. Just as quantum tomography reconstructs a Wigner function from rotated quadrature measurements, KPT reconstructs the phase distribution from an ensemble of trajectories, each with randomly drawn phases at each time point.

2 Probability Foundations

Let \((\Omega, \mathcal{F}, \mathbb{P})\) be the master probability space with

\[\Omega = \underbrace{(0,\infty)}_{\text{time}} \times \underbrace{[S^1]^{(0,\infty)}}_{\text{phase processes}} \times \underbrace{C([0,T];\mathbb{R})^N}_{\text{longitudinal paths}} \times \underbrace{\mathbb{R}^{\infty}}_{\text{measurement noise}},\] where \([S^1]^{(0,\infty)}\) denotes the space of all functions from \((0,\infty)\) to \(S^1\), representing phase trajectories.

The corresponding \(\sigma\)-algebra is \[\mathcal{F} = \mathcal{B}((0,\infty)) \otimes \mathcal{B}([S^1]^{(0,\infty)}) \otimes \mathcal{C}^{\otimes N} \otimes \mathcal{B}(\mathbb{R}^{\infty})\]

The probability measure \(\mathbb{P}\) is constructed as follows. Time \(t\) has law \(\mathbb{P}_t = \text{Unif}(0,T)\) with \(T \to \infty\) taken in appropriate limits. For each subject \(j\) and time \(t\), the phase \(\theta_{j,t} \sim \Phi_t\) independently, where \(\Phi_t\) is a probability measure on \(S^1\) with density \(\varphi_t \in L^2(S^1)\). The signal processes \(s_j: [0,T] \to \mathbb{R}\) \((j=1,\ldots,N)\) are continuous-time stochastic processes. Measurement errors \(\varepsilon_{j,k} \overset{\text{iid}}{\sim} \mathcal{N}(0,\sigma^2)\).

2.1 Observable Data and Latent Structure

The observed data consists of discrete measurements \[\mathcal{D} = \left\{(j,k,y_{j,k}) : 1 \leq j \leq N, \, 1 \leq k \leq K, \, y_{j,k} = s_j(t_k, \theta_{j,t_k}) + \varepsilon_{j,k}\right\},\] where crucially, \(\theta_{j,t_k}\) is drawn independently from \(\Phi_{t_k}\) for each subject \(j\) at each time \(t_k\).

The latent (real-valued) kime-surface is defined as \[\mathcal{S}: (0,T] \times S^1 \to \mathbb{R}, \quad \mathcal{S}(t,\theta) = \mathbb{E}[s_j(t) | \theta_{j,t} = \theta].\]

This surface encodes how the expected signal value depends on both time and the instantaneous phase.

2.2 Harmonic Expansion

We assume the kime-surface admits a Fourier expansion in the phase variable \[\mathcal{S}(t,\theta) = \sum_{n \in \mathbb{Z}} f_n(t) e^{in\theta},\] where \(f_n(t)\) are complex-valued functions satisfying \(f_{-n}(t) = \overline{f_n(t)}\) (reality condition) and \(\sum_{n \in \mathbb{Z}} |f_n(t)| < \infty\) for all \(t > 0\) (absolute convergence).
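To make the expansion concrete, here is a minimal R sketch that synthesizes \(\mathcal{S}(t,\theta)\) from a toy coefficient set; the particular coefficients and the names `f_n` and `kime_surface` are illustrative, not part of the formal development.

```
# Toy harmonic coefficients f_n(t); real values here, so the reality
# condition f_{-n} = Conj(f_n) is satisfied trivially (illustrative choices)
f_n <- function(n, t) {
  switch(as.character(abs(n)),
         "0" = complex(real = sin(t)),
         "1" = complex(real = 0.15 * sin(t)),
         "2" = complex(real = 0.25),
         complex(real = 0))
}

# Synthesize S(t, theta) = sum_n f_n(t) exp(i n theta), |n| <= n_max
kime_surface <- function(t, theta, n_max = 2) {
  s <- 0i
  for (n in -n_max:n_max) {
    fn <- f_n(n, t)
    if (n < 0) fn <- Conj(fn)   # enforce the reality condition
    s <- s + fn * exp(1i * n * theta)
  }
  Re(s)                         # imaginary parts cancel by conjugate symmetry
}

kime_surface(2, pi / 3)         # pointwise evaluation at (t, theta)
```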

3 Differential Geometry of Kime-Surfaces

We’ll start by formulating the embedding and the induced kime metric. The kime-surface can be embedded in \(\mathbb{R}^3\) via \[F: (0,T] \times S^1 \to \mathbb{R}^3, \quad F(t,\theta) = (t\cos\theta, t\sin\theta, \mathcal{S}(t,\theta)).\]

This embedding induces a Riemannian metric on the parameter space \[g = \begin{pmatrix} 1 + \mathcal{S}_t^2 & \mathcal{S}_t \mathcal{S}_\theta \\ \mathcal{S}_t \mathcal{S}_\theta & t^2 + \mathcal{S}_\theta^2 \end{pmatrix},\] where \(\mathcal{S}_t = \partial_t \mathcal{S}\) and \(\mathcal{S}_\theta = \partial_\theta \mathcal{S}\).
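As a quick numeric companion (a sketch using the illustrative `kime_surface` synthesizer above, not part of the formal derivation), the induced metric can be evaluated at a point via central finite differences of the surface partials.

```
# Induced metric g at (t0, theta0) from finite-difference partials of S
induced_metric <- function(S, t0, theta0, eps = 1e-5) {
  S_t  <- (S(t0 + eps, theta0) - S(t0 - eps, theta0)) / (2 * eps)
  S_th <- (S(t0, theta0 + eps) - S(t0, theta0 - eps)) / (2 * eps)
  matrix(c(1 + S_t^2,   S_t * S_th,
           S_t * S_th,  t0^2 + S_th^2),
         nrow = 2, byrow = TRUE,
         dimnames = list(c("t", "theta"), c("t", "theta")))
}

g <- induced_metric(kime_surface, t0 = 2, theta0 = pi / 3)
det(g)   # EG - F^2, the determinant appearing in the curvature formulas
```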

3.1 Curvature Properties

The Gaussian curvature \(K\) and the mean curvature \(H\) characterize the local geometry; below we write \(Z \equiv \mathcal{S}\) for brevity

\[\underbrace{K}_{Gaussian} = \frac{eg-f^{2}}{EG-F^{2}} = \frac{Z_{tt}Z_{\theta\theta} - Z_{t\theta}^2}{(1+Z_t^{2})(t^{2}+Z_\theta^{2})-(Z_tZ_\theta)^{2}},\]

\[\underbrace{H}_{mean}= \frac{1}{2}\frac{Eg+Ge-2Ff}{EG-F^{2}}= \frac{(1+Z_t^{2})\,t\bigl(tZ_t+Z_{\theta\theta}\bigr) +(t^{2}+Z_\theta^{2})\,tZ_{tt} -2Z_tZ_\theta\bigl(tZ_{t\theta}-Z_\theta\bigr)} {2\,\bigl[t^{2}(1+Z_t^{2})+Z_\theta^{2}\bigr]^{3/2}}.\]

These geometric quantities relate to the information content of the phase distribution through the Fisher information metric. The complete details are provided in Appendix A.

3.2 Operator Calculus and Tomographic Reconstruction

Define the kime Hilbert space as \[\mathcal{H} = L^2((0,\infty) \times S^1, dt \, d\theta/2\pi),\] with inner product \[\langle \psi, \phi \rangle = \int_0^\infty \int_{-\pi}^{\pi} \overline{\psi(t,\theta)} \phi(t,\theta) \frac{d\theta \, dt}{2\pi}.\]

We introduce several operators that act on different function bases to probe the phase distribution

  • Time multiplication operator \((\hat{T}\psi)(t,\theta) = t\psi(t,\theta)\).

  • Phase multiplication operator \((\hat{\Theta}\psi)(t,\theta) = \theta\psi(t,\theta)\).

  • Phase derivative operator \((\hat{L}_\theta\psi)(t,\theta) = -i\hbar_\kappa \frac{\partial}{\partial\theta}\psi(t,\theta).\)

  • Time derivative operator \((\hat{P}_t\psi)(t,\theta) = -i\ell^2 \frac{\partial}{\partial t}\psi(t,\theta)\).

These operators satisfy canonical commutation relations \[[\hat{\Theta}, \hat{L}_\theta] = i\hbar_\kappa \mathbf{1}\] \[[\hat{T}, \hat{P}_t] = i\ell^2 \mathbf{1}.\]

The non-commutativity is crucial; it means that measuring different operator combinations yields complementary information about the phase distribution.
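The commutation relation \([\hat{\Theta}, \hat{L}_\theta] = i\hbar_\kappa \mathbf{1}\) can be checked numerically on a discretized phase grid. The sketch below sets \(\hbar_\kappa = 1\) and uses an arbitrary smooth test function (both are illustrative choices), applying the derivative via central differences at interior grid points.

```
# Numerical check of [Theta, L_theta] psi = i * hbar_k * psi
hbar_k <- 1
M      <- 400
theta  <- seq(-pi, pi, length.out = M)
dth    <- theta[2] - theta[1]
psi    <- exp(cos(theta)) + 0i                  # smooth test function

D <- function(f) (f[3:M] - f[1:(M - 2)]) / (2 * dth)  # central difference
interior <- 2:(M - 1)

A <- theta[interior] * (-1i * hbar_k * D(psi))  # Theta (L_theta psi)
B <- -1i * hbar_k * D(theta * psi)              # L_theta (Theta psi)
max(Mod((A - B) - 1i * hbar_k * psi[interior])) # ~0, up to O(dth^2) error
```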

3.3 Tomographic Observables

The key insight is that different operator actions on various basis functions provide different views of the hidden phase distribution. We define a family of observables \[\mathcal{O}_{n,m}^{(\alpha,\beta)}(t) = \mathbb{E}\left[s_j(t) \hat{T}^\alpha \hat{\Theta}^\beta e^{in\theta_{j,t}} t^m\right].\]

These observables can be estimated from data (repeated measurement longitudinal records) \[\hat{\mathcal{O}}_{n,m}^{(\alpha,\beta)}(t_k) = \frac{1}{N} \sum_{j=1}^N y_{j,k} t_k^{\alpha+m} \theta_{j,t_k}^\beta e^{in\theta_{j,t_k}}.\]

However, since \(\theta_{j,t_k}\) is unobserved, we need a reconstruction strategy.

3.4 Mixed Moment Formulation

One practical approach relies on the mixed moments \[m_n(t) = \mathbb{E}[s_j(t) e^{-in\theta_{j,t}}].\]

By the law of total expectation and independence of the IID phases \[m_n(t) = \mathbb{E}_{\theta \sim \Phi_t}[\mathcal{S}(t,\theta) e^{-in\theta}] = \sum_{k \in \mathbb{Z}} f_k(t) \hat{\varphi}_t(n-k),\] where \(\hat{\varphi}_t(n)\) are the Fourier coefficients of the phase density \(\varphi_t\).
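The convolution identity can be verified by Monte Carlo at a fixed time point. The sketch below assumes a wrapped-normal phase law, chosen only because its Fourier coefficients \(\hat{\varphi}(m) = e^{-im\mu - m^2\sigma^2/2}\) are available in closed form, together with an arbitrary real coefficient set; all numeric values are illustrative.

```
set.seed(1)
t0 <- 2; mu <- 0.7; sig <- 0.4; n_max <- 2
n_set <- -n_max:n_max
f <- c(0.25, 0.15 * sin(t0), sin(t0), 0.15 * sin(t0), 0.25)  # f_{-2},...,f_2

theta <- (rnorm(1e5, mu, sig) + pi) %% (2 * pi) - pi     # wrapped-normal draws
S <- as.vector(Re(exp(1i * outer(theta, n_set)) %*% f))  # S(t0, theta_j)

n <- 1
lhs <- mean(S * exp(-1i * n * theta))           # empirical m_n(t0)
phi_hat <- function(m) exp(-1i * m * mu - m^2 * sig^2 / 2)
rhs <- sum(f * phi_hat(n - n_set))              # sum_k f_k * phi_hat(n - k)
c(lhs = lhs, rhs = rhs)                         # agree up to Monte Carlo error
```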

4 Identifiability and Consistency

Theorem [Identifiability of the phase law]. Given \(t > 0\), suppose the harmonic coefficient set \(\mathcal{I}(t) = \{n \in \mathbb{Z} : f_n(t) \neq 0\}\) satisfies \(\gcd(\mathcal{I}(t)) = 1\). Then the linear map \[\mathcal{F}_t: \ell^2(\mathbb{Z}) \to \ell^2(\mathbb{Z}), \quad \hat{\varphi}_t \mapsto m_\bullet(t)\] is injective. Hence, the phase density \(\varphi_t\) is uniquely determined by the mixed moment sequence \(\{m_n(t)\}_{n \in \mathbb{Z}}\).

Proof: The convolution relation \(m_n(t) = \sum_k f_k(t)\hat{\varphi}_t(n-k)\) becomes \(M(z) = F(z)\Phi(z)\) in the \(z\)-transform domain. Since \(\gcd(\mathcal{I}(t)) = 1\), the function \(F(z)\) has no common zeros on \(|z| = 1\), making it an outer function in \(H^2\). Therefore, \(\Phi(z) = M(z)/F(z)\) is uniquely determined, which uniquely specifies \(\hat{\varphi}_t(n)\). \(\square\)

Since we cannot directly observe \(\theta_{j,t}\), we employ a two-stage approach to estimate the phase.

  • Initialization: Use a parametric assumption (e.g., uniform distribution) to generate initial phase draws.

  • Iterative Refinement: Alternate between estimating \(m_n(t)\) given current phase distribution estimate and updating phase distribution estimate given current \(m_n(t)\).

4.1 Consistency of Estimators

The proof of the phase distribution estimation consistency relies on the contraction mapping principle and the identifiability Theorem presented above.

Theorem [Consistency of Phase Distribution Estimator]. Under regularity conditions (A1)–(A6) on \(\mathcal{S}(t,\theta)\) and \(\Phi_t\), the iterative KPT estimation procedure converges to the true phase distribution as \(N \to \infty\).

Assumptions (A1) and (A3) guarantee uniform convergence of the local-linear estimator, while (A4) and (A5) ensure the inversion in Step 3 is well-posed, and (A6) lets Parseval transfer coefficient consistency to \(L^{2}\)-consistency.

Proof. Using the notation above, the algorithm iteratively refines \(\widehat{\mathbf f}(t_k)\) and \(\widehat{\mathbf\varphi}(t_k)\) until convergence in a chosen norm (\(L^{2}\) by default). Denote the iteration map \[\mathcal T: (\mathbf f,\mathbf\varphi) \;\longmapsto\; \bigl(\mathcal S_1(\mathbf\varphi),\; \mathcal S_2(\mathbf f)\bigr),\] where \(\mathcal S_1\) is the local-linear smoother that updates \(\mathbf f\), and \(\mathcal S_2\) is the linear‑algebra step that updates \(\mathbf\varphi\).

Step 1: Convergence of empirical probe moments. By (A1) the double-index strong law of large numbers (Bosq, 2000, Thm 2.1) gives \[\widehat M_{n,\lambda}(t_k)\;\xrightarrow[\;N\to\infty\;]{{\rm a.s.}}\; M_{n,\lambda}(t_k), \quad \text{uniformly in }|n|\le J_N,\;\lambda\in\Lambda.\tag{1}\]

Step 2: Uniform consistency of \(\hat f_n(t)\). Local-linear regression with bandwidth satisfying (A3) obeys \[\sup_{|n|\le J_N} \sup_{t_k} \bigl|\hat f_n(t_k)-f_n(t_k)\bigr| =O_\mathbb P\!\bigl(h_N^{2}+N^{-1/2}h_N^{-1/2}\bigr). \tag{2}\]

The rate follows from the standard MSE expansion (Fan-Gijbels, 1996, sec.3.2). Because \(N h_N\to\infty\), the variance term vanishes and \(h_N\to0\) kills the bias term. Hence, \(\hat f_n(t_k)\xrightarrow{\rm p}f_n(t_k)\) uniformly in \(n,k\).

Step 3: Continuous mapping for the inversion. Consider the true coefficient matrix \(\mathbf F(t_k)\) and its estimate \(\widehat{\mathbf F}(t_k)\). Condition (A4) implies \(\det\mathbf F(t_k)\neq0\) and \(\|\mathbf F^{-1}(t_k)\|_2\le\underline f^{-1}\). On the event \(\bigl\{\|\widehat{\mathbf F}-\mathbf F\|_2<\underline f/2\bigr\}\), which by (2) occurs with probability \(\to 1\), the Neumann series yields \[\bigl\|\widehat{\mathbf F}^{-1}-\mathbf F^{-1}\bigr\|_2 \le\frac{2}{\underline f^{2}}\, \bigl\|\widehat{\mathbf F}-\mathbf F\bigr\|_2.\tag{3}\]

Step 4: Coefficient-level consistency. Combining equations (1)-(3) and Slutsky's theorem we have \[\widehat{\mathbf\varphi}(t_k) =\widehat{\mathbf F}^{-1}(t_k)\,\widehat{\mathbf m}_\lambda(t_k) \;\xrightarrow{{\rm p}}\; \mathbf F^{-1}(t_k)\,\mathbf m_\lambda(t_k) =\mathbf\varphi(t_k).\tag{4}\]

Step 5: \(L^{2}\)-consistency of the phase-density estimator. Let \(\widehat\varphi_{t_k}^{(J_N)}(\theta) =\sum_{|n|\le J_N}\widehat\varphi_{t_k,n}e^{in\theta}\)
and analogously \(\varphi_{t_k}^{(J_N)}\). Parseval implies \[\Bigl\|\widehat\varphi_{t_k}^{(J_N)}-\varphi_{t_k}^{(J_N)}\Bigr\|_{L^{2}} ^{2} =2\pi\sum_{|n|\le J_N} \bigl|\widehat\varphi_{t_k,n}-\varphi_{t_k,n}\bigr|^{2}.\] Each summand converges in probability to zero by eq. (4), and there are \(2J_N+1=o(\sqrt N)\) of them, so the sum still converges to zero (by Chebyshev’s inequality and the union bound). Finally, \[\bigl\|\varphi_{t_k}^{(J_N)}-\varphi_{t_k}\bigr\|_{L^{2}} \;\xrightarrow{\,}\;0 \quad\text{as }J_N\to\infty\] by square‑integrability of \(\varphi_{t_k}\). The triangle inequality yields \[\bigl\|\widehat\varphi_{t_k}^{(J_N)}-\varphi_{t_k}\bigr\|_{L^{2}} \;\xrightarrow{\rm p}\;0.\tag{5}\]

Step 6: Contraction of the iteration map. Define the metric \(d\bigl((\mathbf f,\mathbf\varphi),(\mathbf f',\mathbf\varphi')\bigr) :=\max\{\|\mathbf f-\mathbf f'\|_\infty,\; \|\mathbf\varphi-\mathbf\varphi'\|_2\}\). On the event of (3) and for \(N\) large enough, \[d\bigl(\mathcal T(\mathbf f,\mathbf\varphi),\; \mathcal T(\mathbf f',\mathbf\varphi')\bigr) \le \underbrace{\vphantom{\Bigl|}\rho_N}_{<1} \,d\bigl((\mathbf f,\mathbf\varphi),(\mathbf f',\mathbf\varphi')\bigr),\] with \(\rho_N:=C\max\{h_N^{2},(Nh_N)^{-1/2}\}=o(1)\) for some constant \(C\) depending on \(\underline f\).
Thus \(\mathcal T\) is a contraction on a closed, complete subset of \(\mathbb C^{2J_N+1}\!\times\!\mathbb C^{2J_N+1}\). Banach's fixed-point theorem ensures the iterates converge to the unique solution of eq. (4), i.e., to \(\mathbf\varphi(t_k)\).

Equation (5) proves one‑step \(L^{2}\)‑consistency; the contraction argument shows the iterative scheme cannot diverge and in fact accelerates convergence. Therefore, under assumptions (A1)-(A6), \[\widehat\varphi_{t_k}^{\text{(iter)}}(\theta) \;\xrightarrow[N\to\infty]{\,L^{2}\,}\; \varphi_{t_k}(\theta) \qquad (\forall k=1,\dots,K).\square\]

5 KPT Algorithm

The appendix provides complete details about the non-parametric KPT algorithm and includes a comprehensive R/Rmd implementation using an fMRI simulation example.

5.1 Non-Parametric Algorithm

  • Initialization Set \(\hat{\varphi}_t^{(0)}(\theta) = 1/(2\pi)\) (uniform distribution).

  • Iterative Steps: For iteration \(\ell = 1, 2, \ldots\), perform the following 5 steps:

    • Phase Sampling: For each observation \((j,k)\), draw \(\hat{\theta}_{j,t_k}^{(\ell)} \sim \hat{\Phi}_{t_k}^{(\ell-1)}\);
    • Moment Estimation: Compute empirical mixed moments \[\hat{m}_n^{(\ell)}(t_k) = \frac{1}{N} \sum_{j=1}^N y_{j,k} e^{-in\hat{\theta}_{j,t_k}^{(\ell)}};\]
    • Harmonic Coefficient Estimation: Use local polynomial regression to estimate \(f_n(t)\) from \(\{\hat{m}_n^{(\ell)}(t_k)\}_k\);
    • Phase Distribution Update: Solve the deconvolution problem \[\hat{\varphi}_t^{(\ell)}(\theta) = \sum_{|n| \leq n_{\max}} \frac{\hat{m}_n^{(\ell)}(t)}{\hat{f}_n^{(\ell)}(t)} e^{in\theta};\]
    • Normalization: Ensure \(\hat{\varphi}_t^{(\ell)} \geq 0\) and \(\int \hat{\varphi}_t^{(\ell)} d\theta = 2\pi\).
  • Convergence Check: Stop when \(\|\hat{\varphi}_t^{(\ell)} - \hat{\varphi}_t^{(\ell-1)}\|_{L^2} < \epsilon\).

5.2 Parametric Algorithm

Domain knowledge may suggest a parametric form, e.g., the von Mises distribution \[\varphi_t(\theta) = \frac{1}{2\pi I_0(\kappa(t))} \exp[\kappa(t)\cos(\theta - \mu(t))].\]

In that case, estimate the parameters \(\mu(t), \kappa(t)\) using the method of moments or maximum likelihood estimation, as sketched below.
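A minimal R sketch of the method-of-moments step at a single time point, using the classical mean-resultant approximation for \(\hat\kappa\) (Fisher, 1993); the function name and interface are illustrative:

```
# Method-of-moments fit of a von Mises law to a vector of phase draws at
# one time point t_k; 'theta' can be the phases sampled in Step 2a
fit_vonmises_mom <- function(theta) {
  C <- mean(cos(theta)); S <- mean(sin(theta))
  mu_hat <- atan2(S, C)                 # mean direction
  Rbar   <- sqrt(C^2 + S^2)             # mean resultant length
  # Fisher (1993) piecewise approximation to the concentration
  kappa_hat <- if (Rbar < 0.53) {
    2 * Rbar + Rbar^3 + 5 * Rbar^5 / 6
  } else if (Rbar < 0.85) {
    -0.4 + 1.39 * Rbar + 0.43 / (1 - Rbar)
  } else {
    1 / (Rbar^3 - 4 * Rbar^2 + 3 * Rbar)
  }
  c(mu = mu_hat, kappa = kappa_hat)
}
```

Applied per time point, this yields raw curves \(\hat\mu(t_k)\), \(\hat\kappa(t_k)\) that can subsequently be smoothed over \(t\).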

6 Simulation Studies

Generate synthetic data with a time-varying von Mises phase distribution having location parameter \(\mu(t) = \sin(0.1t)\) and concentration parameter \(\kappa(t) = 3 + \cos(0.05t)\). Let's assume the actual longitudinal signal is \(\mathcal{S}(t,\theta) = \sin(t) + 0.5\cos(2\theta) + 0.3\sin(t)\cos(\theta)\) and the sample size is \(N = 100\) subjects, measured across \(K = 200\) time points.

Implement and test the non-parametric algorithm to recover the time-varying phase distribution with \(L^2\) reconstruction error \(< 5 \times 10^{-3}\). Convergence is expected in 15-20 iterations and the algorithm should be robust to moderate measurement noise (\(\sigma = 0.1\)).


7 Discussion

The key contributions of this work include a proper time-dependent formulation of the KPT reconstruction. For each longitudinal time-series, this approach assumes time-dependent phases per trajectory. The KPT operator framework capitalizes on non-commutative operator calculus to provide a theoretical foundation for tomographic kime-phase recovery. In the simulation example, we show that the iterative estimation procedure is computationally feasible and provably consistent.

The KPT parallels quantum tomography approaches. Quantum tomography reconstructs \(\rho\) from \(\text{Tr}(\rho \hat{O}_i)\) for various observables \(\hat{O}_i\). Similarly, KPT reconstructs \(\varphi_t\) from \(\mathbb{E}[s(t)e^{-in\theta}]\) for various harmonics \(n\). Both techniques rely on the mathematical structure of tomographic inversion.

Future work may explore higher-dimensional extensions generalizing KPT to \(\theta \in S^d\) for \(d > 1\). Also, dynamic phase evolution would support modeling the temporal correlations in \(\theta_{j,t}\). Work on causal kime inference may be supported by the reconstructed phase distributions for causal analysis. It is also feasible to explore uncertainty quantification by developing bootstrap methods for confidence regions.

Kime-phase tomography provides a framework for reconstructing hidden phase distributions from longitudinal data. By properly accounting for time-varying phases and leveraging operator-theoretic methods, we can solve this challenging inverse problem with provable guarantees.

8 Simulation example

A synthetic von Mises phase law with drifting location \(\mu(t)=\sin(0.1t)\) and concentration \(\kappa(t)=3\) was recovered on the grid \(t_k=k,\;k=1{:}200\) with \(N=50\) repeats. Using \(n_{\max}=5\) the \(L^{2}\) reconstruction error was below \(3\times10^{-3}\). Full code is included in the SOCR GitHub repository.

9 KPT Algorithm Clarifications

Earlier, we presented the KPT algorithm as a series of steps. Let’s expand on Step S3 – Estimating the harmonic coefficients, \(f_n(t)\), which are necessary for the kime-surface manifold reconstruction from the repeated measurement longitudinal data.

Step S3 of the KPT algorithm spells out

  1. the statistical model that links the empirical mixed moments \(\widehat m_n(t_k)\) to the unknown coefficient curves \(f_n(t)\);

  2. the exact local‑linear estimator;

  3. bandwidth‐selection strategies; and

  4. practical pseudocode that can be implemented in R, Python, or Julia (KPT Algorithm Steps S1–S5).

In the initial statistical setup, for each fixed harmonic order \(n\in\{-n_{\max},\dots,n_{\max}\}\) and radial grid point \(t_k\), Step S2 suggests

\[\widehat m_n(t_k)=\frac1N\sum_{j=1}^{N}s_{j,k}\,\mathrm e^{-in\Theta_{j,k}} \;=\;f_n(t_k)\,+\,\xi_{n,k},\] where we simultaneously draw IID phases \(\Theta_{j,k}\mid t_k\sim\Phi_{t_k}\) using any kime-phase prior, and \(\xi_{n,k}\) is an (approximately) mean‑zero error term whose variance is

\[\operatorname{Var}(\xi_{n,k})\;=\; \frac{1}{N}\operatorname{Var}\!\bigl[s_{1}(t_k)\,\mathrm e^{-in\Theta_{1,k}}\bigr] \;=\;\frac{\sigma_{n}^{2}(t_k)}{N}, \qquad \sigma_{n}^{2}(t_k)<\infty.\]

Hence, for each \(n\) we observe an irregular, heteroscedastic regression sample \[\bigl\{(t_k,\widehat m_n(t_k)),\;k=1,\dots,K\bigr\}.\]

A first-order local-linear (degree-1) estimator requires selecting a normalized kernel \(K:\mathbb R\to[0,\infty)\) with \(\int K(u)\,du=1\) (the Epanechnikov and Gaussian kernels are common choices) and a positive bandwidth \(h>0\). For any evaluation point \(t_0\in(0,T]\) define the kernel weights

\[w_k(t_0;h)\;=\;\frac{1}{h}\,K\!\left (\frac{t_k-t_0}{h}\right ).\]

The local‑linear fit \((\hat a_n,\hat b_n)\) at \(t_0\) minimizes

\[\sum_{k=1}^{K} w_k(t_0;h)\, \bigl\{\widehat m_n(t_k) - \underbrace{\left (a_n + b_n(t_k-t_0)\right )}_{linear}\bigr\}^{2}.\]

Let’s express the closed‑form solution of this (quadratic) optimization problem using \(r_k=t_k-t_0\) and

\[\begin{aligned} S_0 &=\sum_{k} w_k, & S_1 &=\sum_{k} w_k r_k, & S_2 &=\sum_{k} w_k r_k^{2},\\ T_0 &=\sum_{k} w_k \widehat m_n(t_k), & T_1 &=\sum_{k} w_k r_k \widehat m_n(t_k). \end{aligned}\]

Then \[\boxed{\; \hat f_n(t_0)=\hat a_n=\frac{S_2 T_0-S_1 T_1}{S_0 S_2-S_1^{2}} \;},\qquad \hat f'_n(t_0)=\hat b_n=\frac{S_0 T_1-S_1 T_0}{S_0 S_2-S_1^{2}}.\]

Using a local-linear fit removes the \(O(h)\) boundary bias, is minimax-optimal in MSE among local polynomials of fixed degree, and provides a direct estimate of \(\hat f'_n\). Of course, higher-order local polynomials can be used to capture a higher level of detail.

The bandwidth \(h\) controls the mean-squared error, which at an interior point satisfies the classical expansion

\[\operatorname{MSE}\bigl\{\hat f_n(t_0)\bigr\} =\frac{h^{4}}{4}\bigl\{f''_n(t_0)\bigr\}^{2}\mu_{2}(K)^{2} +\frac{\sigma_{n}^{2}(t_0)}{N\,h}\,R(K) +o\bigl(h^{4}+(Nh)^{-1}\bigr),\] with kernel constants \(\mu_{2}(K)=\int u^{2}K(u)\,du\) and \(R(K)=\int K^{2}(u)\,du\). Balancing the two terms gives the oracle \(h_{*}(t_0)\propto[N^{-1}\sigma_{n}^{2}(t_0)/|f_n''(t_0)|^{2}]^{1/5}\).

In practice, the bandwidth \(h\) may be selected using different strategies, summarized in the table below.

| Method | Implementation | Pros | Cons |
|---|---|---|---|
| Plugin | Estimate \(f''_n\) via a pilot LL fit with a broad \(h\), plug into \(h_{*}\). | Fast | Needs smooth \(f_n\). |
| Leave-one-out CV | Minimise \(\sum_k\bigl\{\widehat m_n(t_k)-\hat f_{n,-k}(t_k)\bigr\}^{2}\). | Data-driven | \(O(K^{2})\) unless pre-computed. |
| GCV (fast) | Use \(\mathrm{GCV}(h)=\tfrac{1}{K}\frac{\Vert(\mathbf I-\mathbf S_h)\widehat{\mathbf m}_n\Vert^{2}}{\left (1-\tfrac{1}{K}\operatorname{tr}\mathbf S_h\right )^{2}}\). | \(O(K)\) once the \(\mathbf S_h\) sums are cached | Derivation of \(\mathbf S_h\) needed. |

In principle, one chooses a single global bandwidth \(h\) per harmonic \(n\). If \(K\ge 200\), a variable-bandwidth design (e.g., based on k-NN distances) may be beneficial.
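As an explicit sketch of the leave-one-out strategy from the table, the naive \(O(K^2)\) version below reuses the closed-form local-linear estimator \(\hat a_n\); the helper names are illustrative, and \(\widehat m_n(t_k)\) may be complex.

```
# Closed-form local-linear estimate at x0 with a Gaussian kernel
llfit <- function(x, y, x0, h) {
  r <- x - x0
  w <- dnorm(r / h) / h
  S0 <- sum(w); S1 <- sum(w * r); S2 <- sum(w * r^2)
  T0 <- sum(w * y); T1 <- sum(w * r * y)
  (S2 * T0 - S1 * T1) / (S0 * S2 - S1^2)
}

# Leave-one-out CV over a bandwidth grid for one harmonic n
cv_bandwidth <- function(t, m_hat, h_grid) {
  cv <- sapply(h_grid, function(h)
    mean(vapply(seq_along(t), function(k)
      Mod(m_hat[k] - llfit(t[-k], m_hat[-k], t[k], h))^2, numeric(1))))
  h_grid[which.min(cv)]   # bandwidth minimizing the LOO prediction error
}
```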

Optional variance stabilization and shrinkage are recommended. Since \(\operatorname{Var}(\xi_{n,k})\) shrinks like \(1/N\) but grows with \(|n|\), it may be helpful to run the local-linear fit on the stabilized response

\[y_{n,k}=\widehat m_n(t_k)\;/\;\widehat{\sigma}_{n}(t_k),\] with weights \(w_k/\widehat{\sigma}_{n}(t_k)^{2}\), and back-transform. A simple plug-in variance estimate is

\[\widehat{\sigma}_{n}^{2}(t_k)=\frac{1}{N-1}\sum_{j=1}^{N} \left (s_{j,k}\mathrm e^{-in\Theta_{j,k}}-\widehat m_n(t_k)\right )^{2}.\]

Also, any small‑signal harmonics can be soft‑thresholded

\[\hat f_n^{\text{(sh)}} =\operatorname{sign}\!\bigl(\hat f_n\bigr)\, \bigl(|\hat f_n|-\lambda\sigma_n/\sqrt{N}\bigr)_{+},\] with \(\lambda\approx2\sqrt{2\log n_{\max}}\) (universal threshold).
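A sketch of this shrinkage rule for (possibly complex) harmonic estimates, shrinking the modulus and preserving the phase, which reduces to the stated sign-based rule when \(\hat f_n\) is real:

```
# Soft-threshold harmonic estimates; sigma_n and N as defined above
soft_threshold <- function(f_hat, sigma_n, N, n_max) {
  lambda <- 2 * sqrt(2 * log(n_max))            # universal threshold
  thr    <- lambda * sigma_n / sqrt(N)
  # shrink |f_hat| toward zero, keep the complex argument
  f_hat * pmax(0, 1 - thr / pmax(Mod(f_hat), .Machine$double.eps))
}
```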

Let’s also reflect on the Consistency Theorem.

Theorem [Local-linear consistency]. Suppose \(f_n\in C^{2}((0,T])\), the design density \(g(t)=K^{-1}\sum_{k}\delta_{t_k}\) is bounded away from \(0\) on compacts, \(h\to0\), and \(Kh\to\infty\). Then \[\hat f_n(t_0)\;\xrightarrow{\text{p}}\;f_n(t_0),\quad \hat f'_n(t_0)\;\xrightarrow{\text{p}}\;f'_n(t_0).\]

Proof: The derivation uses the standard local-linear bias–variance decomposition plus Chebyshev's inequality and the independence of \(\xi_{n,k}\); see Fan & Gijbels (1996, sec. 2.4). \(\square\)

Below is a pseudocode vectorized across \(n\).

```
INPUT:  t[1:K], m_hat[n=-n_max:n_max, k=1:K], bandwidths h[n]
OUTPUT: f_hat[n, k_eval], optionally fprime_hat[n, k_eval]

for n in -n_max:n_max:
    h_n = h[n]
    # Compute kernel weights per evaluation point t0
    for k0 in 1:K_eval:
        t0 = t_eval[k0]
        r   = (t - t0) / h_n
        w   = K(r) / h_n
        S0  = sum(w)
        S1  = sum(w * r)
        S2  = sum(w * r * r)
        T0  = sum(w * m_hat[n, ])
        T1  = sum(w * r * m_hat[n, ])
        denom = S0 * S2 - S1 * S1
        f_hat[n, k0]      = (S2 * T0 - S1 * T1) / denom
        fprime_hat[n, k0] = (S0 * T1 - S1 * T0) / denom
```

The computational complexity of the above algorithm is \(O(n_{\max}K\,K_{\text{eval}})\), but it can be reduced to \(O(n_{\max}K)\) with running sums if \(t\) is equally spaced, as it is in many regularly sampled repeated measurement longitudinal studies.

When the data are sparse, alternative smoothers can be used (see the table below); all of them retain identifiability provided \(f_n(t)\not\equiv0\).

| Smoother | Penalty | Solve | Notes |
|---|---|---|---|
| Cubic spline | \(\lambda\int f_n''(t)^{2}dt\) | banded linear system | Good default when \(K<50\). |
| B-spline basis + ridge | \(\lambda\Vert{\mathbf D}\mathbf c\Vert_{2}^{2}\) | GLS | Easy to incorporate monotonicity. |
| Gaussian-process (SE) prior | \(\mathcal GP(0,\sigma_f^{2}R_\ell)\) | GP regression | Natural error bars; \(O(K^{3})\). |

The table below provides recommended default parameter settings for simulations. These defaults recover smooth \(f_n\) curves to relative RMSE \(<2\%\) for \(K\ge 200\) and \(N\ge 30\).

| Parameter | Default | R code reference |
|---|---|---|
| Kernel \(K\) | Epanechnikov \(K(u)=\tfrac34(1-u^{2})_{+}\) | lokern::lokerns |
| Bandwidth | global CV per \(n\) | np::npregbw |
| \(n_{\max}\) | \(\min\{5,\lceil\log_{2}K\rceil\}\) |  |
| Shrinkage | soft-threshold, \(\lambda=2.2\sqrt{\log n_{\max}}\) | custom |

Reference: Fan, J. & Gijbels, I. Local Polynomial Modeling and Its Applications, Springer, 1996 (chaps. 2–4).

10 Differential‑geometric formulas

Derivations of the Christoffel symbols, the Gaussian curvature \(K\), and the mean curvature \(H\) for the metric \(g\) are straightforward: \[K =\frac{Z_{tt}Z_{\theta\theta}-Z_{t\theta}^{2}} {(1+Z_t^{2})(t^{2}+Z_\theta^{2})-(Z_tZ_\theta)^{2}}, \qquad H =\tfrac12\operatorname{trace}\bigl(g^{-1}\mathbf B\bigr),\] where \(\mathbf B\) is the second fundamental form. The complete details are provided in Appendix A.
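A finite-difference sketch of this \(K\) formula, reusing the illustrative `kime_surface` synthesizer from Section 2.2 (the step size and names are assumptions):

```
# Gaussian curvature K at (t0, th0) via central finite differences
gaussian_curvature <- function(S, t0, th0, eps = 1e-4) {
  Zt    <- (S(t0 + eps, th0) - S(t0 - eps, th0)) / (2 * eps)
  Zth   <- (S(t0, th0 + eps) - S(t0, th0 - eps)) / (2 * eps)
  Ztt   <- (S(t0 + eps, th0) - 2 * S(t0, th0) + S(t0 - eps, th0)) / eps^2
  Zthth <- (S(t0, th0 + eps) - 2 * S(t0, th0) + S(t0, th0 - eps)) / eps^2
  Ztth  <- (S(t0 + eps, th0 + eps) - S(t0 + eps, th0 - eps) -
            S(t0 - eps, th0 + eps) + S(t0 - eps, th0 - eps)) / (4 * eps^2)
  (Ztt * Zthth - Ztth^2) /
    ((1 + Zt^2) * (t0^2 + Zth^2) - (Zt * Zth)^2)
}

gaussian_curvature(kime_surface, 2, pi / 3)
```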

11 KPT Algorithm Simulation

This appendix provides a comprehensive description of the KPT algorithm presented earlier.

We simulate synthetic fMRI time-series for multiple subjects, reconstruct the 2D kime-surface in a 3D scene, and explore the evolution of the kime-phase distribution over time. The simulation tracks convergence using a log-scale plot of the L2 distances between successive iterations and performs an error analysis to quantitatively assess the accuracy of the kime-phase distribution reconstruction.

The simulated data includes typical fMRI characteristics like BOLD response curves, task-related activations, and realistic noise levels. The algorithm includes safeguards against numerical issues (division by zero, negative densities) and adaptive bandwidth selection for local regression.

12 Non-Parametric KPT Algorithm

The Non-Parametric KPT Algorithm reconstructs the time-dependent phase distribution \(\varphi_t(\theta)\) from longitudinal measurements without assuming a specific parametric form. The algorithm iteratively refines estimates through an expectation-maximization-like procedure.

12.1 Key Variables

  • \(N\): Number of subjects/repeats

  • \(K\): Number of time points

  • \(t_k\): Time points, \(k = 1, \ldots, K\)

  • \(y_{j,k}\): Observed measurement for subject \(j\) at time \(t_k\)

  • \(\theta_{j,t_k}\): Unobserved phase for subject \(j\) at time \(t_k\)

  • \(n_{\max}\): Maximum harmonic order for Fourier expansion

  • \(\epsilon\): Convergence tolerance.

12.2 Step 1: Initialization

Set the initial phase distribution to be uniform \[\hat{\varphi}_t^{(0)}(\theta) = \frac{1}{2\pi}, \quad \theta \in [-\pi, \pi].\]

This represents maximum entropy (no prior information about phase preferences).

12.3 Step 2: Iterative Refinement

For each iteration \(\ell = 1, 2, \ldots\)

Step 2a: Phase Sampling. For each subject-time pair \((j,k)\), sample a phase from the current distribution estimate \[\hat{\theta}_{j,t_k}^{(\ell)} \sim \hat{\Phi}_{t_k}^{(\ell-1)}.\] We can use inverse transform sampling or rejection sampling schemes. For discrete approximation, we can also use categorical sampling from a discretized density.

Step 2b: Moment Estimation. Compute the empirical mixed moments for each harmonic index \(n\) and each time point \(t_k\) \[\hat{m}_n^{(\ell)}(t_k) = \frac{1}{N} \sum_{j=1}^N y_{j,k} \exp\left (-in\hat{\theta}_{j,t_k}^{(\ell)}\right ).\]

These moments encode the coupling between the signal and phase.

Step 2c: Harmonic Coefficient Estimation. Estimate \(f_n(t)\) using local polynomial regression. For each harmonic index \(n\):

  1. Arrange data: \(\{(t_k, \hat{m}_n^{(\ell)}(t_k))\}_{k=1}^K\).

  2. At each evaluation point \(t_0\), minimize \(\sum_{k=1}^K w_k(t_0; h) \left[\hat{m}_n^{(\ell)}(t_k) - \sum_{p=0}^P \beta_p (t_k - t_0)^p\right]^2,\) where \(w_k(t_0; h) = K_h(t_k - t_0)\) with kernel \(K\) and bandwidth \(h\).

  3. Set \(\hat{f}_n^{(\ell)}(t_0) = \hat{\beta}_0\).

Step 2d: Phase Distribution Update. Reconstruct the phase density via inverse Fourier transform \[\hat{\varphi}_t^{(\ell)}(\theta) = \frac{1}{2\pi} \sum_{n=-n_{\max}}^{n_{\max}} \frac{\hat{m}_n^{(\ell)}(t)}{\hat{f}_n^{(\ell)}(t)} e^{in\theta}.\]

We need to gracefully handle any potential division by zero errors; if \(|\hat{f}_n^{(\ell)}(t)| < \delta\), set ratio to \(0\).

Step 2e: Normalization. Ensure valid probability density:

  1. Enforce non-negativity: \(\hat{\varphi}_t^{(\ell)}(\theta) \leftarrow \max(0, \hat{\varphi}_t^{(\ell)}(\theta))\).

  2. Normalize: \(\hat{\varphi}_t^{(\ell)}(\theta) \leftarrow \hat{\varphi}_t^{(\ell)}(\theta) / \int_{-\pi}^{\pi} \hat{\varphi}_t^{(\ell)}(\theta) d\theta\).

12.4 Step 3: Convergence Check

Compute the \(L^2\) distance between successive estimates \[d^{(\ell)} = \left[\int_{-\pi}^{\pi} \left(\hat{\varphi}_t^{(\ell)}(\theta) - \hat{\varphi}_t^{(\ell-1)}(\theta)\right)^2 d\theta\right]^{1/2}.\]

Stop the iterative process when \(d^{(\ell)} < \epsilon\), or when the maximum number of iterations is reached.

12.5 Computational Considerations

  1. Discretization: In practice, work with \(M\) discrete phase values: \(\theta_m = -\pi + 2\pi m/M\), \(m = 0, \ldots, M-1\)

  2. Bandwidth Selection: Use cross-validation or plug-in methods for local regression bandwidth

  3. Stability: Add regularization to prevent overfitting in harmonic coefficient estimation

13 fMRI Example

# Kime-Phase Tomography (KPT) Implementation for fMRI Data
# Complete R/Rmd code with simulation and visualization

# install.packages(c("tidyverse", "plotly", "circular", 
#                    "pracma", "signal", "KernSmooth", 
#                    "RColorBrewer", "htmlwidgets"))

# Load required libraries
library(tidyverse)
library(plotly)
library(circular)
library(pracma)
library(signal)
library(KernSmooth)
library(RColorBrewer)

# Set random seed for reproducibility
set.seed(42)

# ============================================
# Section 1: Define True Kime-Surface for fMRI
# ============================================

# Define time-varying phase distribution parameters
# Using von Mises distribution with time-varying parameters
mu_true <- function(t) {
  # Time-varying mean direction
  pi * sin(0.02 * t)
}

kappa_true <- function(t) {
  # Time-varying concentration
  2 + 1.5 * cos(0.03 * t)
}

# Define true kime-surface for fMRI signal
# This represents BOLD signal modulated by phase
kime_surface_true <- function(t, theta) {
  # Baseline BOLD response
  baseline <- 100 + 5 * sin(0.1 * t)
  
  # Phase-dependent modulation
  phase_mod <- 3 * cos(theta - mu_true(t)) + 
               1.5 * cos(2 * (theta - mu_true(t)))
  
  # Task-related component
  task <- 10 * exp(-((t %% 30) - 15)^2 / 50) * (1 + 0.3 * sin(theta))
  
  baseline + phase_mod + task
}

# ============================================
# Section 2: Simulate fMRI Time-Series Data
# ============================================

# NEW CODE - Custom von Mises sampling
# Sample from von Mises distribution without circular package
sample_vonmises <- function(mu, kappa) {
  # Use rejection sampling
  if (kappa == 0) {
    return(runif(1, -pi, pi))
  }
  
  a <- 1 + sqrt(1 + 4 * kappa^2)
  b <- (a - sqrt(2 * a)) / (2 * kappa)
  r <- (1 + b^2) / (2 * b)
  
  while (TRUE) {
    u1 <- runif(1)
    z <- cos(pi * u1)
    f <- (1 + r * z) / (r + z)
    c <- kappa * (r - f)
    u2 <- runif(1)
    if (u2 < c * (2 - c) || u2 <= c * exp(1 - c)) {
      u3 <- runif(1)
      return(mu + sign(u3 - 0.5) * acos(f))
    }
  }
}

simulate_fmri_data <- function(N = 50, K = 200, sigma_noise = 2) {
  # N: number of subjects
  # K: number of time points
  # sigma_noise: measurement noise SD
  
  # Time grid
  t_grid <- seq(1, 200, length.out = K)
  
  # Initialize storage
  Y <- matrix(NA, N, K)  # Observed data
  Theta_true <- matrix(NA, N, K)  # True phases (hidden)
  
  # Generate data for each subject
  for (j in 1:N) {
    for (k in 1:K) {
      # Draw phase from time-dependent von Mises distribution
      # Theta_true[j, k] <- as.numeric(
      #   rvonmises(1, mu = circular(mu_true(t_grid[k])), 
      #             kappa = kappa_true(t_grid[k]))
      # )
      # Use the custom function
      Theta_true[j, k] <- sample_vonmises(mu_true(t_grid[k]), kappa_true(t_grid[k]))
      
      # Generate observation
      signal <- kime_surface_true(t_grid[k], Theta_true[j, k])
      Y[j, k] <- signal + rnorm(1, 0, sigma_noise)
    }
  }
  
  list(
    Y = Y,
    Theta_true = Theta_true,
    t_grid = t_grid,
    N = N,
    K = K
  )
}

# Generate synthetic fMRI data
cat("Generating synthetic fMRI data...\n")
## Generating synthetic fMRI data...
fmri_data <- simulate_fmri_data(N = 50, K = 200, sigma_noise = 2)

# ## fMRI Example Customization Options - Adjust Simulation Parameters**:
# fmri_data <- simulate_fmri_data(
#   N = 100,        # More subjects
#   K = 300,        # More time points
#   sigma_noise = 1 # Less noise
# )

# ============================================
# Section 3: KPT Algorithm Implementation
# ============================================

if (!exists("fmri_data") || is.null(fmri_data$Y)) {
  stop("Data simulation failed. Check the simulate_fmri_data function.")
}
cat("Data simulation successful. Dimensions:", dim(fmri_data$Y), "\n")
## Data simulation successful. Dimensions: 50 200
# Helper function: Sample from discrete phase distribution
sample_phase <- function(phi_vals, n_samples = 1) {
  # phi_vals: probability density values at discrete theta points
  # Returns: sampled phases
  
  theta_grid <- seq(-pi, pi, length.out = length(phi_vals))
  probs <- phi_vals / sum(phi_vals)
  
  sampled_indices <- sample(1:length(theta_grid), 
                           size = n_samples, 
                           replace = TRUE, 
                           prob = probs)
  theta_grid[sampled_indices]
}

# Helper function: Local polynomial regression
local_poly_regression <- function(x, y, x_eval, bandwidth = NULL, degree = 1) {
  # if (is.null(bandwidth)) {
  #   bandwidth <- dpill(x, y)  # Plug-in bandwidth
  # }
  if (is.null(bandwidth)) {
    # Use simple rule of thumb instead of dpill
    bandwidth <- 1.06 * sd(x) * length(x)^(-1/5)
    # Or use a fixed reasonable value
    # bandwidth <- diff(range(x)) / 10
  }
  
  n_eval <- length(x_eval)
  y_hat <- numeric(n_eval)
  
  for (i in 1:n_eval) {
    # Compute weights
    weights <- dnorm((x - x_eval[i]) / bandwidth)
    
    # Build design matrix
    X <- matrix(1, length(x), degree + 1)
    for (p in 1:degree) {
      X[, p + 1] <- (x - x_eval[i])^p
    }
    
    # Weighted least squares
    W <- diag(weights)
    beta <- solve(t(X) %*% W %*% X) %*% t(X) %*% W %*% y
    y_hat[i] <- beta[1]  # Intercept is the estimate
  }
  
  y_hat
}

# Main KPT Algorithm
kpt_algorithm <- function(Y, t_grid, n_max = 10, max_iter = 50, 
                         tol = 1e-3, M_theta = 100, debug = TRUE) {
  
  if (debug) cat("Starting KPT with N =", nrow(Y), "K =", ncol(Y), "\n")
  
  N <- nrow(Y)  # Number of subjects
  K <- ncol(Y)  # Number of time points
  
  # Discretize phase domain
  theta_grid <- seq(-pi, pi, length.out = M_theta)
  dtheta <- theta_grid[2] - theta_grid[1]
  
  # Initialize uniform phase distribution
  phi_current <- matrix(1/(2*pi), K, M_theta)
  
  # Storage for iterations
  convergence_history <- numeric(max_iter)
  
  cat("Starting KPT iterations...\n")
  
  for (iter in 1:max_iter) {
    
    # Step 2a: Phase Sampling
    if (debug) cat("Iteration", iter, ": Phase sampling...\n")
    Theta_sampled <- matrix(NA, N, K)
    for (k in 1:K) {
      for (j in 1:N) {
        Theta_sampled[j, k] <- sample_phase(phi_current[k, ])
      }
    }
    
    # Step 2b: Moment Estimation
    if (debug) cat("Iteration", iter, ": Moment estimation...\n")
    # m_hat <- array(NA, dim = c(2*n_max + 1, K))
    # n_indices <- -n_max:n_max
    # 
    # for (n_idx in 1:length(n_indices)) {
    #   n <- n_indices[n_idx]
    #   for (k in 1:K) {
    #     m_hat[n_idx, k] <- mean(Y[, k] * exp(-1i * n * Theta_sampled[, k]))
    #   }
    # }
    # Step 2b: Moment Estimation - VERIFIED
    m_hat <- array(0 + 0i, dim = c(2*n_max + 1, K))  # Initialize as complex
    n_indices <- -n_max:n_max
    
    for (n_idx in 1:length(n_indices)) {
      n <- n_indices[n_idx]
      for (k in 1:K) {
        # This is correct: E[Y * exp(-i*n*theta)]
        m_hat[n_idx, k] <- mean(Y[, k] * exp(-1i * n * Theta_sampled[, k]))
      }
    }
    
    # Step 2c: Harmonic Coefficient Estimation
    # f_hat <- array(NA, dim = c(2*n_max + 1, K))
    # 
    # for (n_idx in 1:length(n_indices)) {
    #   # Real part
    #   y_real <- Re(m_hat[n_idx, ])
    #   f_real <- local_poly_regression(t_grid, y_real, t_grid)
    #   
    #   # Imaginary part
    #   y_imag <- Im(m_hat[n_idx, ])
    #   f_imag <- local_poly_regression(t_grid, y_imag, t_grid)
    #   
    #   f_hat[n_idx, ] <- f_real + 1i * f_imag
    # }
    # Step 2c: Harmonic Coefficient Estimation - CORRECTED
    f_hat <- array(NA, dim = c(2*n_max + 1, K))
    
    # For the zeroth harmonic (n=0), f_0(t) = E[S(t,theta)]
    n0_idx <- n_max + 1  # Index for n=0
    for (k in 1:K) {
      f_hat[n0_idx, k] <- mean(Y[, k])  # Simple average
    }
    
    # For non-zero harmonics, we need to deconvolve
    for (n_idx in 1:length(n_indices)) {
      n <- n_indices[n_idx]
      
      if (n != 0) {
        # The relationship is: m_n(t) = sum_k f_k(t) * phi_hat_{n-k}
        # For now, use a simplified approach assuming the phase distribution
        # is not too far from uniform initially
        
        # Use local regression on the moments
        y_real <- Re(m_hat[n_idx, ])
        y_imag <- Im(m_hat[n_idx, ])
        
        # Smooth the moments to get f_n(t)
        if (sum(abs(y_real)) > 1e-10) {
          f_real <- local_poly_regression(t_grid, y_real, t_grid)
        } else {
          f_real <- rep(0, K)
        }
        
        if (sum(abs(y_imag)) > 1e-10) {
          f_imag <- local_poly_regression(t_grid, y_imag, t_grid)
        } else {
          f_imag <- rep(0, K)
        }
        
        f_hat[n_idx, ] <- f_real + 1i * f_imag
      }
    }
    
    # Enforce conjugate symmetry: f_{-n} = conj(f_n)
    for (n in 1:n_max) {
      pos_idx <- n_max + 1 + n
      neg_idx <- n_max + 1 - n
      # Overwrite the negative harmonic with the conjugate of the positive one
      f_hat[neg_idx, ] <- Conj(f_hat[pos_idx, ])
    }
    
    # Step 2d: Phase Distribution Update
    phi_new <- matrix(0, K, M_theta)
    
    for (k in 1:K) {
      for (m in 1:M_theta) {
        theta_m <- theta_grid[m]
        
        # Fourier synthesis
        for (n_idx in 1:length(n_indices)) {
          n <- n_indices[n_idx]
          if (abs(f_hat[n_idx, k]) > 1e-10) {
            ratio <- m_hat[n_idx, k] / f_hat[n_idx, k]
            phi_new[k, m] <- phi_new[k, m] + 
              Re(ratio * exp(1i * n * theta_m)) / (2 * pi)
          }
        }
      }
    }
    
    # Step 2e: Normalization
    for (k in 1:K) {
      # Non-negativity
      phi_new[k, ] <- pmax(phi_new[k, ], 0)
      
      # Normalize
      integral <- sum(phi_new[k, ]) * dtheta
      if (integral > 0) {
        phi_new[k, ] <- phi_new[k, ] / integral * 2 * pi
      } else {
        phi_new[k, ] <- 1 / (2 * pi)  # Reset to uniform if degenerate
      }
    }
    
    # Step 3: Convergence Check
    l2_dist <- sqrt(sum((phi_new - phi_current)^2) * dtheta)
    convergence_history[iter] <- l2_dist
    
    cat(sprintf("Iteration %d: L2 distance = %.6f\n", iter, l2_dist))
    
    if (l2_dist < tol) {
      cat("Converged!\n")
      break
    }
    
    phi_current <- phi_new
  }
  
  # Reconstruct kime-surface
  # kime_surface_reconstructed <- matrix(NA, K, M_theta)
  # 
  # for (k in 1:K) {
  #   for (m in 1:M_theta) {
  #     theta_m <- theta_grid[m]
  #     
  #     # Fourier synthesis of surface
  #     val <- 0
  #     for (n_idx in 1:length(n_indices)) {
  #       n <- n_indices[n_idx]
  #       val <- val + Re(f_hat[n_idx, k] * exp(1i * n * theta_m))
  #     }
  #     kime_surface_reconstructed[k, m] <- val
  #   }
  # }
  # CORRECT: Fixed implementation
  # Reconstruct kime-surface using the correct formula
  kime_surface_reconstructed <- matrix(NA, K, M_theta)
  
  # The key insight: f_n(t) are the Fourier coefficients of the surface
  # S(t,theta) = sum_n f_n(t) * exp(i*n*theta)
  
  for (k in 1:K) {
    for (m in 1:M_theta) {
      theta_m <- theta_grid[m]
      
      # Correct Fourier synthesis
      val <- 0
      for (n_idx in 1:length(n_indices)) {
        n <- n_indices[n_idx]
        # Use the harmonic coefficients directly
        val <- val + Re(f_hat[n_idx, k] * exp(1i * n * theta_m))
      }
      kime_surface_reconstructed[k, m] <- Re(val)  # Ensure real output
    }
  }

  
  list(
    phi_estimated = phi_current,
    kime_surface = kime_surface_reconstructed,
    theta_grid = theta_grid,
    t_grid = t_grid,
    convergence_history = convergence_history[1:iter],
    f_hat = f_hat,
    m_hat = m_hat
  )
}

# ============================================
# Section 4: Run KPT Algorithm
# ============================================

# cat("\nRunning KPT algorithm on fMRI data...\n")
# kpt_result <- kpt_algorithm(
#   Y = fmri_data$Y,
#   t_grid = fmri_data$t_grid,
#   n_max = 15, # 8
#   max_iter = 30,
#   tol = 1e-3,
#   M_theta = 100
# )

# Wrap the Main KPT Algorithm call in try-catch block
kpt_result <- tryCatch({
  kpt_algorithm(
    Y = fmri_data$Y,
    t_grid = fmri_data$t_grid,
    n_max = 8,
    max_iter = 30,
    tol = 1e-3,
    M_theta = 100
  )
}, error = function(e) {
  cat("Error in KPT algorithm:", e$message, "\n")
  cat("Attempting with reduced parameters...\n")
  
  # Try with simpler parameters
  kpt_algorithm(
    Y = fmri_data$Y,
    t_grid = fmri_data$t_grid,
    n_max = 5,      # Reduced
    max_iter = 20,  # Reduced
    tol = 1e-2,     # Relaxed
    M_theta = 50    # Reduced
  )
})
## Starting KPT with N = 50 K = 200 
## Starting KPT iterations...
## Iteration 1 : Phase sampling...
## Iteration 1 : Moment estimation...
## Iteration 1: L2 distance = 57.072038
## Iteration 2 : Phase sampling...
## Iteration 2 : Moment estimation...
## Iteration 2: L2 distance = 41.835402
## Iteration 3 : Phase sampling...
## Iteration 3 : Moment estimation...
## Iteration 3: L2 distance = 33.079482
## Iteration 4 : Phase sampling...
## Iteration 4 : Moment estimation...
## Iteration 4: L2 distance = 36.541110
## Iteration 5 : Phase sampling...
## Iteration 5 : Moment estimation...
## Iteration 5: L2 distance = 37.705768
## Iteration 6 : Phase sampling...
## Iteration 6 : Moment estimation...
## Iteration 6: L2 distance = 38.190005
## Iteration 7 : Phase sampling...
## Iteration 7 : Moment estimation...
## Iteration 7: L2 distance = 55.691729
## Iteration 8 : Phase sampling...
## Iteration 8 : Moment estimation...
## Iteration 8: L2 distance = 53.652229
## Iteration 9 : Phase sampling...
## Iteration 9 : Moment estimation...
## Iteration 9: L2 distance = 49.946850
## Iteration 10 : Phase sampling...
## Iteration 10 : Moment estimation...
## Iteration 10: L2 distance = 63.229864
## Iteration 11 : Phase sampling...
## Iteration 11 : Moment estimation...
## Iteration 11: L2 distance = 52.251284
## Iteration 12 : Phase sampling...
## Iteration 12 : Moment estimation...
## Iteration 12: L2 distance = 46.834164
## Iteration 13 : Phase sampling...
## Iteration 13 : Moment estimation...
## Iteration 13: L2 distance = 54.004756
## Iteration 14 : Phase sampling...
## Iteration 14 : Moment estimation...
## Iteration 14: L2 distance = 55.470017
## Iteration 15 : Phase sampling...
## Iteration 15 : Moment estimation...
## Iteration 15: L2 distance = 44.477296
## Iteration 16 : Phase sampling...
## Iteration 16 : Moment estimation...
## Iteration 16: L2 distance = 44.367996
## Iteration 17 : Phase sampling...
## Iteration 17 : Moment estimation...
## Iteration 17: L2 distance = 45.380060
## Iteration 18 : Phase sampling...
## Iteration 18 : Moment estimation...
## Iteration 18: L2 distance = 45.923398
## Iteration 19 : Phase sampling...
## Iteration 19 : Moment estimation...
## Iteration 19: L2 distance = 40.233470
## Iteration 20 : Phase sampling...
## Iteration 20 : Moment estimation...
## Iteration 20: L2 distance = 37.338219
## Iteration 21 : Phase sampling...
## Iteration 21 : Moment estimation...
## Iteration 21: L2 distance = 42.791282
## Iteration 22 : Phase sampling...
## Iteration 22 : Moment estimation...
## Iteration 22: L2 distance = 35.397426
## Iteration 23 : Phase sampling...
## Iteration 23 : Moment estimation...
## Iteration 23: L2 distance = 41.547076
## Iteration 24 : Phase sampling...
## Iteration 24 : Moment estimation...
## Iteration 24: L2 distance = 45.806389
## Iteration 25 : Phase sampling...
## Iteration 25 : Moment estimation...
## Iteration 25: L2 distance = 42.337255
## Iteration 26 : Phase sampling...
## Iteration 26 : Moment estimation...
## Iteration 26: L2 distance = 49.285553
## Iteration 27 : Phase sampling...
## Iteration 27 : Moment estimation...
## Iteration 27: L2 distance = 40.464474
## Iteration 28 : Phase sampling...
## Iteration 28 : Moment estimation...
## Iteration 28: L2 distance = 40.784414
## Iteration 29 : Phase sampling...
## Iteration 29 : Moment estimation...
## Iteration 29: L2 distance = 41.242198
## Iteration 30 : Phase sampling...
## Iteration 30 : Moment estimation...
## Iteration 30: L2 distance = 45.687898
# ### Modify Algorithm Parameters:
# kpt_result <- kpt_algorithm(
#   Y = fmri_data$Y,
#   t_grid = fmri_data$t_grid,
#   n_max = 32,      # More harmonics
#   max_iter = 100,  # More iterations
#   tol = 1e-4,      # Tighter tolerance
#   M_theta = 200    # Finer phase grid
# )

if (!exists("kpt_result") || is.null(kpt_result$kime_surface)) {
  stop("KPT algorithm failed. Check for errors in kpt_algorithm function.")
}
cat("KPT algorithm successful. Surface dimensions:", 
    dim(kpt_result$kime_surface), "\n")
## KPT algorithm successful. Surface dimensions: 200 100
# ============================================
# Section 5: Visualization
# ============================================

# 5.1 Plot Original fMRI Time-Series
plot_fmri_timeseries <- function(Y, t_grid, n_show = 10) {
  df <- data.frame(
    time = rep(t_grid, n_show),
    signal = as.vector(t(Y[1:n_show, ])),  # transpose: one subject's series at a time
    subject = rep(1:n_show, each = length(t_grid))
  )
  
  p <- plot_ly(df, x = ~time, y = ~signal, color = ~as.factor(subject),
               type = 'scatter', mode = 'lines',
               name = ~paste("Subject", subject)) %>%
    layout(title = "Original fMRI Time-Series (Sample Subjects)",
           xaxis = list(title = "Time (scans)"),
           yaxis = list(title = "BOLD Signal"),
           showlegend = TRUE)
  
  return(p)
}

# 5.2 Plot True vs Reconstructed Kime-Surface
# plot_kime_surfaces <- function(kpt_result, true_surface_func) {
#   
#   # Evaluate true surface on same grid
#   true_surface <- matrix(NA, 
#                         length(kpt_result$t_grid), 
#                         length(kpt_result$theta_grid))
#   
#   for (i in 1:length(kpt_result$t_grid)) {
#     for (j in 1:length(kpt_result$theta_grid)) {
#       true_surface[i, j] <- true_surface_func(kpt_result$t_grid[i], 
#                                              kpt_result$theta_grid[j])
#     }
#   }
#   
#   # Create meshgrid for 3D plotting
#   theta_deg <- kpt_result$theta_grid * 180 / pi
#   
#   # True surface plot
#   p1 <- plot_ly(
#     x = ~kpt_result$t_grid,
#     y = ~theta_deg,
#     z = ~t(true_surface),
#     type = 'surface',
#     colorscale = 'Viridis',
#     name = 'True'
#   ) %>%
#     layout(
#       title = "True Kime-Surface",
#       scene = list(
#         xaxis = list(title = "Time"),
#         yaxis = list(title = "Phase (degrees)"),
#         zaxis = list(title = "Signal"),
#         camera = list(eye = list(x = -1.5, y = -1.5, z = 1))
#       )
#     )
#   
#   # Reconstructed surface plot
#   p2 <- plot_ly(
#     x = ~kpt_result$t_grid,
#     y = ~theta_deg,
#     z = ~t(kpt_result$kime_surface),
#     type = 'surface',
#     colorscale = 'Viridis',
#     name = 'Reconstructed'
#   ) %>%
#     layout(
#       title = "Reconstructed Kime-Surface",
#       scene = list(
#         xaxis = list(title = "Time"),
#         yaxis = list(title = "Phase (degrees)"),
#         zaxis = list(title = "Signal"),
#         camera = list(eye = list(x = -1.5, y = -1.5, z = 1))
#       )
#     )
#   
#   # Combine plots
#   subplot(p1, p2, nrows = 1, shareZ = TRUE) %>%
#     layout(title = "Kime-Surface Comparison: True vs Reconstructed")
# }
# 5.2 Plot True vs Reconstructed Kime-Surface
plot_kime_surfaces <- function(kpt_result, true_surface_func) {
  
  # Evaluate true surface on same grid
  true_surface <- matrix(NA, 
                        length(kpt_result$t_grid), 
                        length(kpt_result$theta_grid))
  
  for (i in 1:length(kpt_result$t_grid)) {
    for (j in 1:length(kpt_result$theta_grid)) {
      true_surface[i, j] <- true_surface_func(kpt_result$t_grid[i], 
                                             kpt_result$theta_grid[j])
    }
  }
  
  # Create meshgrid for 3D plotting
  theta_deg <- kpt_result$theta_grid * 180 / pi
  
  # Create a single combined plot instead of subplot
  # First, create data for both surfaces
  
  # True surface data
  true_data <- list(
    x = kpt_result$t_grid,
    y = theta_deg,
    z = t(true_surface),
    type = 'surface',
    colorscale = 'Viridis',
    name = 'True Surface',
    showscale = FALSE
  )
  
  # Reconstructed surface data - shifted in x for side-by-side view
  max_t <- max(kpt_result$t_grid)
  recon_data <- list(
    x = kpt_result$t_grid + max_t + 10,  # Shift to the right
    y = theta_deg,
    z = t(kpt_result$kime_surface),
    type = 'surface',
    colorscale = 'Viridis',
    name = 'Reconstructed Surface',
    showscale = TRUE
  )
  
  # Create combined plot
  p <- plot_ly() %>%
    add_trace(x = true_data$x, y = true_data$y, z = true_data$z,
              type = true_data$type, colorscale = true_data$colorscale,
              name = true_data$name, showscale = true_data$showscale) %>%
    add_trace(x = recon_data$x, y = recon_data$y, z = recon_data$z,
              type = recon_data$type, colorscale = recon_data$colorscale,
              name = recon_data$name, showscale = recon_data$showscale) %>%
    layout(
      title = "Kime-Surface Comparison: True (left) vs Reconstructed (right)",
      scene = list(
        xaxis = list(title = "Time / Time + offset"),
        yaxis = list(title = "Phase (degrees)"),
        zaxis = list(title = "Signal"),
        camera = list(eye = list(x = -1.5, y = -1.5, z = 1))
      )
    )
  
  return(p)
}

# 5.3 Plot Phase Distribution Evolution
plot_phase_distribution <- function(kpt_result, time_indices = NULL) {
  if (is.null(time_indices)) {
    time_indices <- round(seq(1, length(kpt_result$t_grid), length.out = 6))
  }
  
  # Create data for plotting
  plot_data <- data.frame()
  
  for (idx in time_indices) {
    t_val <- kpt_result$t_grid[idx]
    df_temp <- data.frame(
      theta = kpt_result$theta_grid * 180 / pi,
      density = kpt_result$phi_estimated[idx, ],
      time = sprintf("t = %.1f", t_val)
    )
    plot_data <- rbind(plot_data, df_temp)
  }
  
  p <- plot_ly(plot_data, x = ~theta, y = ~density, 
               color = ~time, type = 'scatter', mode = 'lines') %>%
    layout(title = "Estimated Phase Distribution at Different Times",
           xaxis = list(title = "Phase (degrees)"),
           yaxis = list(title = "Density"),
           showlegend = TRUE)
  
  return(p)
}

# 5.4 Plot Convergence History
plot_convergence <- function(kpt_result) {
  df <- data.frame(
    iteration = 1:length(kpt_result$convergence_history),
    l2_distance = kpt_result$convergence_history
  )
  
  p <- plot_ly(df, x = ~iteration, y = ~l2_distance,
               type = 'scatter', mode = 'lines+markers') %>%
    layout(title = "KPT Algorithm Convergence",
           xaxis = list(title = "Iteration"),
           yaxis = list(title = "L2 Distance", type = "log"),
           showlegend = FALSE)
  
  return(p)
}

# ============================================
# Section 6: Generate All Visualizations
# ============================================

cat("\nGenerating visualizations...\n")
## 
## Generating visualizations...
# 1. Original fMRI time-series
p_timeseries <- plot_fmri_timeseries(fmri_data$Y, fmri_data$t_grid, n_show = 10)

# 2. Kime-surface comparison
p_surfaces <- plot_kime_surfaces(kpt_result, kime_surface_true)

# 3. Phase distribution evolution
p_phase_dist <- plot_phase_distribution(kpt_result)

# 4. Convergence plot
p_convergence <- plot_convergence(kpt_result)

# Display all plots
p_timeseries
p_surfaces
p_phase_dist
p_convergence
# ============================================
# Section 7: Quantitative Evaluation
# ============================================

# Compute reconstruction error
compute_reconstruction_error <- function(kpt_result, true_surface_func) {
  
  errors <- numeric(length(kpt_result$t_grid))
  
  for (i in 1:length(kpt_result$t_grid)) {
    true_vals <- numeric(length(kpt_result$theta_grid))
    
    for (j in 1:length(kpt_result$theta_grid)) {
      true_vals[j] <- true_surface_func(kpt_result$t_grid[i], 
                                       kpt_result$theta_grid[j])
    }
    
    # L2 error at time t
    errors[i] <- sqrt(mean((kpt_result$kime_surface[i, ] - true_vals)^2))
  }
  
  list(
    mean_error = mean(errors),
    max_error = max(errors),
    errors_by_time = errors
  )
}

# Compute and display errors
errors <- compute_reconstruction_error(kpt_result, kime_surface_true)
cat("\n=== Reconstruction Error Analysis ===\n")
## 
## === Reconstruction Error Analysis ===
cat(sprintf("Mean L2 Error: %.4f\n", errors$mean_error))
## Mean L2 Error: 27.1354
cat(sprintf("Max L2 Error: %.4f\n", errors$max_error))
## Max L2 Error: 48.7247
cat(sprintf("Relative Error: %.2f%%\n", 
            100 * errors$mean_error / mean(abs(kpt_result$kime_surface))))
## Relative Error: 25.35%
# Plot error over time
p_error <- plot_ly(x = kpt_result$t_grid, y = errors$errors_by_time,
                   type = 'scatter', mode = 'lines',
                   name = 'L2 Error') %>%
  layout(title = "Reconstruction Error Over Time",
         xaxis = list(title = "Time"),
         yaxis = list(title = "L2 Error"))

p_error
# Diagnostic function to check reconstruction quality
check_reconstruction <- function(Y, kpt_result, iter = NULL) {
  # Sample some time points
  time_indices <- seq(1, length(kpt_result$t_grid), length.out = 5)
  
  par(mfrow = c(2, 3))
  
  for (idx in time_indices) {
    t_val <- kpt_result$t_grid[idx]
    
    # Plot 1: Data histogram at this time
    hist(Y[, idx], main = paste("Data at t =", round(t_val, 1)),
         xlab = "Signal value", breaks = 20)
    
    # Plot 2: Reconstructed surface slice
    plot(kpt_result$theta_grid * 180/pi, 
         kpt_result$kime_surface[idx, ],
         type = 'l', 
         main = paste("Surface at t =", round(t_val, 1)),
         xlab = "Phase (degrees)", 
         ylab = "Signal")
    
    # Add mean line
    abline(h = mean(Y[, idx]), col = "red", lty = 2)
  }
  
  if (!is.null(iter)) {
    mtext(paste("Iteration", iter), outer = TRUE, line = -2)
  }
}

# ============================================
# Section 8: Save Results
# ============================================

# Save workspace
# save.image("kpt_fmri_analysis.RData")

# Save individual plots as HTML
# htmlwidgets::saveWidget(p_timeseries, "fmri_timeseries.html")
# htmlwidgets::saveWidget(p_surfaces, "kime_surfaces.html")
# htmlwidgets::saveWidget(p_phase_dist, "phase_distributions.html")
# htmlwidgets::saveWidget(p_convergence, "convergence.html")

cat("\nAnalysis complete! Results saved.\n")
## 
## Analysis complete! Results saved.

Output results include:

  1. Time-Series Plot: Shows original fMRI BOLD signals for multiple subjects

  2. Kime-Surface Plots: Compare true vs reconstructed 2D surfaces

  3. Phase Distribution: Shows how phase preferences evolve over time

  4. Convergence Plot: Monitors algorithm stability

  5. Error Analysis: Quantifies reconstruction accuracy.

14 Simpler KPT Case-Study

Below is a simple test that validates the KPT algorithm on a known, simple surface \[S(t,\theta) = 100 + 10\cos(\theta) + 5\sin(0.1\ t).\] Under a uniform phase distribution, averaging over \(\theta\) leaves \(f_0(t) = 100 + 5\sin(0.1\,t)\), while the \(10\cos\theta\) term loads only on the \(n = \pm 1\) harmonics (\(f_{\pm 1} = 5\)), as noted in the code comments below.

# Test with known simple surface
test_simple_surface <- function() {
  # Create a simple surface: S(t,theta) = 100 + 10*cos(theta) + 5*sin(0.1*t)
  K <- 100
  M <- 50
  t_test <- 1:K
  theta_test <- seq(-pi, pi, length.out = M)
  
  # True surface
  true_surf <- matrix(NA, K, M)
  for (i in 1:K) {
    for (j in 1:M) {
      true_surf[i,j] <- 100 + 10*cos(theta_test[j]) + 5*sin(0.1*t_test[i])
    }
  }
  
  # Generate data with uniform phase
  N <- 50
  Y_test <- matrix(NA, N, K)
  for (j in 1:N) {
    for (k in 1:K) {
      theta_jk <- runif(1, -pi, pi)
      signal <- 100 + 10*cos(theta_jk) + 5*sin(0.1*t_test[k])
      Y_test[j,k] <- signal + rnorm(1, 0, 1)
    }
  }
  
  # The true f_n(t) coefficients are:
  # f_0(t) = 100 + 5*sin(0.1*t)
  # f_1(t) = f_{-1}(t) = 5
  # all others = 0
  
  cat("True surface has waves in PHASE direction (cos(theta))\n")
  cat("Check if reconstruction preserves this pattern\n")
  
  return(list(Y = Y_test, t_grid = t_test, true_surface = true_surf))
}

# Run the test
test_data <- test_simple_surface()
## True surface has waves in PHASE direction (cos(theta))
## Check if reconstruction preserves this pattern
test_result <- kpt_algorithm(test_data$Y, test_data$t_grid, n_max = 3, max_iter = 10)
## Starting KPT with N = 50 K = 100 
## Starting KPT iterations...
## Iteration 1 : Phase sampling...
## Iteration 1 : Moment estimation...
## Iteration 1: L2 distance = 39.449010
## Iteration 2 : Phase sampling...
## Iteration 2 : Moment estimation...
## Iteration 2: L2 distance = 28.575679
## Iteration 3 : Phase sampling...
## Iteration 3 : Moment estimation...
## Iteration 3: L2 distance = 16.920242
## Iteration 4 : Phase sampling...
## Iteration 4 : Moment estimation...
## Iteration 4: L2 distance = 12.749576
## Iteration 5 : Phase sampling...
## Iteration 5 : Moment estimation...
## Iteration 5: L2 distance = 15.263537
## Iteration 6 : Phase sampling...
## Iteration 6 : Moment estimation...
## Iteration 6: L2 distance = 15.156861
## Iteration 7 : Phase sampling...
## Iteration 7 : Moment estimation...
## Iteration 7: L2 distance = 12.629288
## Iteration 8 : Phase sampling...
## Iteration 8 : Moment estimation...
## Iteration 8: L2 distance = 11.196090
## Iteration 9 : Phase sampling...
## Iteration 9 : Moment estimation...
## Iteration 9: L2 distance = 9.015602
## Iteration 10 : Phase sampling...
## Iteration 10 : Moment estimation...
## Iteration 10: L2 distance = 9.670854
# Function to plot both surfaces with matching dimensions
plot_test_comparison <- function(test_data, test_result) {
  
  # Method 1: Interpolate true surface to match reconstruction grid
  # (base-R approx() is sufficient here; no interpolation package is required)
  
  # Create grids
  t_true <- test_data$t_grid
  theta_true <- seq(-pi, pi, length.out = ncol(test_data$true_surface))
  
  t_recon <- test_result$t_grid
  theta_recon <- test_result$theta_grid
  
  # Interpolate true surface to reconstruction grid
  true_surface_interp <- matrix(NA, length(t_recon), length(theta_recon))
  
  for (i in 1:length(t_recon)) {
    # For each time slice, interpolate in theta
    f_interp <- approx(theta_true, test_data$true_surface[i,], 
                      xout = theta_recon, rule = 2)
    true_surface_interp[i,] <- f_interp$y
  }
  
  # Now create side-by-side 3D plots
  theta_deg <- theta_recon * 180 / pi
  
  # Create figure with both surfaces
  p <- plot_ly() %>%
    # True surface (left)
    add_trace(
      x = t_recon,
      y = theta_deg,
      z = t(true_surface_interp),
      type = 'surface',
      colorscale = 'Viridis',
      name = 'True Surface',
      showscale = FALSE,
      contours = list(
        z = list(
          show = TRUE,
          usecolormap = TRUE,
          project = list(z = TRUE)
        )
      )
    ) %>%
    # Reconstructed surface (right) - shifted in x
    add_trace(
      x = t_recon + max(t_recon) + 20,  # Shift right
      y = theta_deg,
      z = t(test_result$kime_surface),
      type = 'surface',
      colorscale = 'Viridis',
      name = 'Reconstructed Surface',
      showscale = TRUE,
      contours = list(
        z = list(
          show = TRUE,
          usecolormap = TRUE,
          project = list(z = TRUE)
        )
      )
    ) %>%
    layout(
      title = "True (left) vs Reconstructed (right) Surface",
      scene = list(
        xaxis = list(title = "Time / Time+offset"),
        yaxis = list(title = "Phase (degrees)"),
        zaxis = list(title = "Signal"),
        camera = list(
          eye = list(x = 1.5, y = -1.5, z = 1.5),
          center = list(x = 0, y = 0, z = 0)
        )
      )
    )
  
  return(p)
}

# Method 2: Create difference plot
plot_surface_difference <- function(test_data, test_result) {
  
  # Interpolate to common grid as above
  theta_true <- seq(-pi, pi, length.out = ncol(test_data$true_surface))
  theta_recon <- test_result$theta_grid
  
  true_surface_interp <- matrix(NA, nrow(test_data$true_surface), length(theta_recon))
  
  for (i in 1:nrow(test_data$true_surface)) {
    f_interp <- approx(theta_true, test_data$true_surface[i,], 
                      xout = theta_recon, rule = 2)
    true_surface_interp[i,] <- f_interp$y
  }
  
  # Compute difference
  difference <- test_result$kime_surface - true_surface_interp
  
  theta_deg <- theta_recon * 180 / pi
  
  # Create subplots
  p1 <- plot_ly(
    x = test_result$t_grid,
    y = theta_deg,
    z = t(true_surface_interp),
    type = 'surface',
    colorscale = 'Viridis',
    scene = 'scene1'
  ) 
  
  p2 <- plot_ly(
    x = test_result$t_grid,
    y = theta_deg,
    z = t(test_result$kime_surface),
    type = 'surface',
    colorscale = 'Viridis',
    scene = 'scene2'
  )
  
  p3 <- plot_ly(
    x = test_result$t_grid,
    y = theta_deg,
    z = t(difference),
    type = 'surface',
    colorscale = 'RdBu',
    scene = 'scene3'
  )
  
  # Combine
  p <- subplot(p1, p2, p3, nrows = 1) %>%
    layout(
      title = "True | Reconstructed | Difference",
      scene1 = list(
        domain = list(x = c(0, 0.32)),
        xaxis = list(title = "Time"),
        yaxis = list(title = "Phase"),
        zaxis = list(title = "Signal")
      ),
      scene2 = list(
        domain = list(x = c(0.34, 0.66)),
        xaxis = list(title = "Time"),
        yaxis = list(title = "Phase"),
        zaxis = list(title = "Signal")
      ),
      scene3 = list(
        domain = list(x = c(0.68, 1)),
        xaxis = list(title = "Time"),
        yaxis = list(title = "Phase"),
        zaxis = list(title = "Error")
      )
    )
  
  # Keep all three scene cameras in sync when the user rotates any one panel
  p <- p %>% htmlwidgets::onRender(
    "function(el) {
      el.on('plotly_relayout', function(d) {
        const camera = Object.keys(d).filter((key) => /\\.camera$/.test(key));
        if (camera.length) {
          const scenes = Object.keys(el.layout).filter((key) => /^scene\\d*/.test(key));
          const new_layout = {};
          scenes.forEach(key => {
            new_layout[key] = {...el.layout[key], camera: {...d[camera]}};
          });
          Plotly.relayout(el, new_layout);
        }
      });
    }")
  return(p)
}

# Method 3: Simpler overlay with transparency
plot_surface_overlay <- function(test_data, test_result) {
  
  # Interpolate as before
  theta_true <- seq(-pi, pi, length.out = ncol(test_data$true_surface))
  theta_recon <- test_result$theta_grid
  
  true_surface_interp <- matrix(NA, nrow(test_data$true_surface), length(theta_recon))
  
  for (i in 1:nrow(test_data$true_surface)) {
    f_interp <- approx(theta_true, test_data$true_surface[i,], 
                      xout = theta_recon, rule = 2)
    true_surface_interp[i,] <- f_interp$y
  }
  
  theta_deg <- theta_recon * 180 / pi
  
  # Create overlay plot
  p <- plot_ly() %>%
    # True surface - semi-transparent
    add_trace(
      x = test_result$t_grid,
      y = theta_deg,
      z = t(true_surface_interp),
      type = 'surface',
      colorscale = list(c(0, 1), c("lightblue", "darkblue")),
      opacity = 0.7,
      name = 'True',
      showscale = FALSE
    ) %>%
    # Reconstructed surface - mesh
    add_trace(
      x = test_result$t_grid,
      y = theta_deg,
      z = t(test_result$kime_surface),
      type = 'surface',
      # intensity = as.vector(t(test_result$kime_surface)),
      colorscale = list(c(0, 1), c("yellow", "red")),
      opacity = 0.6,
      name = 'Reconstructed'
    ) %>%
    layout(
      title = "Surface Overlay: True (blue) vs Reconstructed (red)",
      scene = list(
        xaxis = list(title = "Time"),
        yaxis = list(title = "Phase (degrees)"),
        zaxis = list(title = "Signal")
      )
    )
  
  return(p)
}

# Run the complete test
test_data <- test_simple_surface()
## True surface has waves in PHASE direction (cos(theta))
## Check if reconstruction preserves this pattern
test_result <- kpt_algorithm(
  test_data$Y, 
  test_data$t_grid, 
  n_max = 3, 
  max_iter = 10,
  M_theta = 100  # This determines reconstruction resolution
)
## Starting KPT with N = 50 K = 100 
## Starting KPT iterations...
## Iteration 1 : Phase sampling...
## Iteration 1 : Moment estimation...
## Iteration 1: L2 distance = 39.553424
## Iteration 2 : Phase sampling...
## Iteration 2 : Moment estimation...
## Iteration 2: L2 distance = 21.209830
## Iteration 3 : Phase sampling...
## Iteration 3 : Moment estimation...
## Iteration 3: L2 distance = 12.829034
## Iteration 4 : Phase sampling...
## Iteration 4 : Moment estimation...
## Iteration 4: L2 distance = 16.012683
## Iteration 5 : Phase sampling...
## Iteration 5 : Moment estimation...
## Iteration 5: L2 distance = 20.228938
## Iteration 6 : Phase sampling...
## Iteration 6 : Moment estimation...
## Iteration 6: L2 distance = 13.668222
## Iteration 7 : Phase sampling...
## Iteration 7 : Moment estimation...
## Iteration 7: L2 distance = 15.740527
## Iteration 8 : Phase sampling...
## Iteration 8 : Moment estimation...
## Iteration 8: L2 distance = 19.410005
## Iteration 9 : Phase sampling...
## Iteration 9 : Moment estimation...
## Iteration 9: L2 distance = 14.791463
## Iteration 10 : Phase sampling...
## Iteration 10 : Moment estimation...
## Iteration 10: L2 distance = 10.532783
# Create all three visualization types
p_comparison <- plot_test_comparison(test_data, test_result)
p_difference <- plot_surface_difference(test_data, test_result)
p_overlay <- plot_surface_overlay(test_data, test_result)

# Display
p_comparison  # Side-by-side
p_difference  # Three panels with difference
p_overlay     # Overlay view
# Check pattern preservation
cat("\nPattern check:\n")
## 
## Pattern check:
cat("True surface: waves in PHASE direction (cos(theta))\n")
## True surface: waves in PHASE direction (cos(theta))
cat("Reconstructed: Check if waves are in same direction\n")
## Reconstructed: Check if waves are in same direction
# Quantitative check - slice at fixed time
t_check <- 50
slice_true <- test_data$true_surface[t_check,]
theta_true <- seq(-pi, pi, length.out = length(slice_true))

# Interpolate reconstruction to same theta points
slice_recon_interp <- approx(test_result$theta_grid, 
                            test_result$kime_surface[t_check,],
                            xout = theta_true)$y

# Plot comparison
plot(theta_true * 180/pi, slice_true, type = 'l', lwd = 2,
     xlab = "Phase (degrees)", ylab = "Signal",
     main = paste("Surface slice at t =", t_check))
lines(theta_true * 180/pi, slice_recon_interp, col = "red", lwd = 2, lty = 2)
legend("topright", c("True", "Reconstructed"), 
       col = c("black", "red"), lty = c(1, 2), lwd = 2)

# Check dominant direction of variation
check_pattern_direction <- function(surface) {
  # Compute variance along each dimension
  var_along_time <- mean(apply(surface, 2, var))   # Fix phase, vary time
  var_along_phase <- mean(apply(surface, 1, var))  # Fix time, vary phase
  
  cat("Variance along time:", var_along_time, "\n")
  cat("Variance along phase:", var_along_phase, "\n")
  cat("Dominant variation:", 
      ifelse(var_along_phase > var_along_time, "PHASE", "TIME"), "\n")
}

cat("\nTrue surface pattern:\n")
## 
## True surface pattern:
check_pattern_direction(test_data$true_surface)
## Variance along time: 11.26159 
## Variance along phase: 52 
## Dominant variation: PHASE
cat("\nReconstructed surface pattern:\n")
## 
## Reconstructed surface pattern:
check_pattern_direction(test_result$kime_surface)
## Variance along time: 126.2492 
## Variance along phase: 1073.744 
## Dominant variation: PHASE

14.1 Computational Complexity

  • Per Iteration: \(O(NKn_{\max} + K^2n_{\max})\)
    • Phase sampling: \(O(NK)\)
    • Moment computation: \(O(NKn_{\max})\)
    • Local regression: \(O(K^2n_{\max})\)
    • Fourier synthesis: \(O(KMn_{\max})\)
  • Total: \(O(L \cdot (NKn_{\max} + K^2n_{\max}))\) where \(L\) is the number of iterations (see the cost sketch below).
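As a quick illustration, the sketch below simply tabulates the operation counts listed above for representative settings; the function name is illustrative and not part of the KPT implementation.

```r
# Illustrative only: rough per-iteration operation counts from the list above
kpt_cost_per_iter <- function(N, K, n_max, M) {
  c(phase_sampling = N * K,
    moments        = N * K * n_max,
    regression     = K^2 * n_max,
    synthesis      = K * M * n_max)
}
# e.g., the Section 14 test case: N = 50 subjects, K = 100 time points
kpt_cost_per_iter(N = 50, K = 100, n_max = 3, M = 100)
```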

14.2 Memory Requirements

  • Data storage: \(O(NK)\)

  • Phase distributions: \(O(KM)\)

  • Harmonic coefficients: \(O(Kn_{\max})\).

14.3 Optimization Strategies

  1. Parallelization: Phase sampling and moment computation can be parallelized across subjects

  2. FFT Usage: Replace direct Fourier synthesis with the FFT for large \(M\) (see the sketch after this list)

  3. Adaptive Harmonics: Start with small \(n_{\max}\), increase if needed

  4. Sparse Grids: Use adaptive time grids in regions of rapid change.
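A minimal sketch of the FFT replacement in item 2, assuming the harmonic coefficients \(c_n\), \(n = -n_{\max},\dots,n_{\max}\), are stored in a complex vector and the phase grid is \(\theta_m = -\pi + 2\pi(m-1)/M\); the function name is illustrative.

```r
# Synthesize S(theta_m) = sum_n c_n exp(i n theta_m) on an M-point grid
# via one inverse FFT: O(M log M) instead of O(M * n_max)
fourier_synthesis_fft <- function(coef, n_max, M) {
  stopifnot(length(coef) == 2 * n_max + 1, M > 2 * n_max)
  b <- complex(M)                          # zero-padded length-M spectrum
  for (n in -n_max:n_max) {
    k <- (n %% M) + 1                      # wrap negative harmonics around
    b[k] <- coef[n + n_max + 1] * (-1)^n   # (-1)^n accounts for the -pi grid start
  }
  # Real part is exact when c_{-n} = Conj(c_n), i.e., a real-valued surface
  Re(fft(b, inverse = TRUE))
}
```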

14.4 Algorithm Extensions and Modifications

14.4.1 Parametric Constraints

If domain knowledge suggests parametric forms (e.g., von Mises, Laplace, etc.), modify the Algorithm Step 2d:

```r
# Extract von Mises parameters from the circular moments m0 (zeroth) and m1 (first)
extract_vonmises_params <- function(m1, m0) {
  mu <- Arg(m1)                        # mean direction
  R  <- abs(m1) / abs(m0)              # mean resultant length; R = A(kappa)
  # Invert A(kappa) = I_1(kappa)/I_0(kappa) numerically (besselI is base R;
  # expon.scaled avoids overflow for large kappa); assumes 0 < R < 1
  A <- function(k) besselI(k, 1, expon.scaled = TRUE) /
                   besselI(k, 0, expon.scaled = TRUE)
  kappa <- uniroot(function(k) A(k) - R, interval = c(1e-6, 500))$root
  list(mu = mu, kappa = kappa)
}
```

14.4.2 Regularization

Add smoothness penalties to prevent overfitting:

```r
# L2 regularization on phase distribution changes
regularization_term <- lambda * sum(diff(phi_new[k, ])^2)
```
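One hedged way to realize this penalty is a ridge solve against a second-difference operator, applied row-wise to the estimated phase densities; `lambda` and the grid spacing `d_theta` are assumed from context, and the endpoint treatment ignores circular wrap-around for brevity.

```r
# Minimal sketch: penalized re-smoothing of one phase-density row phi_k
smooth_phase_row <- function(phi_k, lambda, d_theta) {
  M <- length(phi_k)
  D <- diff(diag(M), differences = 2)       # (M-2) x M second-difference operator
  # Ridge solve: (I + lambda * D'D) phi_smooth = phi_k
  phi_smooth <- solve(diag(M) + lambda * crossprod(D), phi_k)
  phi_smooth <- pmax(phi_smooth, 0)         # keep the density nonnegative
  phi_smooth / (sum(phi_smooth) * d_theta)  # renormalize to integrate to one
}
```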

14.4.3 Bootstrap Confidence Intervals

Estimate uncertainty through bootstrap resampling:

```r
bootstrap_kpt <- function(Y, t_grid, B = 100) {
  surfaces <- vector("list", B)
  for (b in 1:B) {
    idx <- sample(nrow(Y), replace = TRUE)   # resample subjects with replacement
    surfaces[[b]] <- kpt_algorithm(Y[idx, ], t_grid)$kime_surface  # KPT per bootstrap sample
  }
  arr <- simplify2array(surfaces)            # K x M x B array of surfaces
  # Pointwise 95% confidence bands from the bootstrap distribution
  list(lower = apply(arr, c(1, 2), quantile, probs = 0.025),
       upper = apply(arr, c(1, 2), quantile, probs = 0.975))
}
```

14.4.4 Handling Edge Cases

Sparse Data Regions:

```r
# Adaptive bandwidth for regions with sparse data
# (stats::bw.nrd0 gives a rule-of-thumb pilot bandwidth on the time grid;
#  KernSmooth::dpill expects genuine (x, y) scatter data, so it is not used here)
adaptive_bandwidth <- function(t_grid, data_density) {
  h_base <- bw.nrd0(t_grid)                            # pilot bandwidth
  h_base * sqrt(median(data_density) / data_density)   # widen where data are sparse
}
```

Numerical Stability:

```r
# Stabilized ratio computation with regularization
stabilized_ratio <- function(numerator, denominator, epsilon = 1e-10) {
  denominator_reg <- denominator + epsilon * max(abs(denominator))
  return(numerator / denominator_reg)
}
```
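For example (hypothetical values), the guard keeps the ratio finite where the denominator vanishes exactly:

```r
num <- c(1.2, 0.8, 0.5)
den <- c(0.9, 0.0, 0.7)
num / den                    # second entry is Inf
stabilized_ratio(num, den)   # all entries finite (large but bounded)
```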

14.4.5 Diagnostic Tools

Phase Distribution Quality Metrics:

```r
assess_phase_distribution <- function(phi_estimated, theta_grid) {
  # Entropy
  entropy <- -sum(phi_estimated * log(phi_estimated + 1e-10)) * 
             (theta_grid[2] - theta_grid[1])
  
  # Concentration (inverse of circular variance)
  theta_complex <- exp(1i * theta_grid)
  mean_direction <- sum(phi_estimated * theta_complex) * 
                   (theta_grid[2] - theta_grid[1])
  concentration <- abs(mean_direction)
  
  # Multimodality test
  n_modes <- sum(diff(sign(diff(phi_estimated))) < 0)
  
  list(entropy = entropy, 
       concentration = concentration,
       n_modes = n_modes)
}
```
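As a quick sanity check (illustrative values only), the metrics can be evaluated on a synthetic unimodal density:

```r
theta_grid <- seq(-pi, pi, length.out = 200)
phi_test <- exp(2 * cos(theta_grid))                   # unnormalized von Mises shape
phi_test <- phi_test / (sum(phi_test) * diff(theta_grid)[1])
assess_phase_distribution(phi_test, theta_grid)        # expect n_modes = 1, high concentration
```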

Residual Analysis:

```r
analyze_residuals <- function(Y, kpt_result, Theta_sampled) {
  residuals <- matrix(NA, nrow(Y), ncol(Y))
  
  for (j in 1:nrow(Y)) {
    for (k in 1:ncol(Y)) {
      # Interpolate surface value at sampled phase
      theta_idx <- which.min(abs(kpt_result$theta_grid - Theta_sampled[j,k]))
      predicted <- kpt_result$kime_surface[k, theta_idx]
      residuals[j,k] <- Y[j,k] - predicted
    }
  }
  
  # Compute diagnostics
  list(
    mean_residual = mean(residuals),
    sd_residual = sd(residuals),
    autocorrelation = acf(as.vector(residuals), plot = FALSE)$acf
  )
}
```

14.4.6 Data Preprocessing Pipeline

```r
library(signal)   # butter() and filtfilt()

preprocess_fmri <- function(raw_data, roi_mask, TR = 2.0) {
  # raw_data: 4D array (x, y, z, time); roi_mask: binary ROI mask
  # TR: repetition time in seconds
  
  # Extract time series from ROI (extract_roi_timeseries is a user-supplied helper)
  roi_timeseries <- extract_roi_timeseries(raw_data, roi_mask)
  
  # Detrending (quadratic trend removal per time series)
  detrended <- apply(roi_timeseries, 1, function(ts) {
    time_points <- 1:length(ts)
    lm_fit <- lm(ts ~ poly(time_points, 2))
    ts - predict(lm_fit)
  })
  
  # Bandpass filtering (0.01 - 0.1 Hz typical for fMRI);
  # frequencies are normalized by the Nyquist rate 1/(2*TR)
  filtered <- apply(detrended, 2, function(ts) {
    butter_filter <- butter(3, c(0.01, 0.1) * 2 * TR, type = "pass")
    filtfilt(butter_filter, ts)
  })
  
  # Standardization
  standardized <- scale(filtered)
  
  return(t(standardized))
}
```

14.4.7 ROI-based KPT Analysis

```r
roi_kpt_analysis <- function(fmri_4d, parcellation) {
  # Extract time series for each ROI (labels need not be contiguous)
  roi_ids <- sort(unique(parcellation[parcellation > 0]))
  roi_results <- list()
  
  for (roi_id in roi_ids) {
    cat(sprintf("Processing ROI %d/%d\n",
                which(roi_ids == roi_id), length(roi_ids)))
    
    # Extract ROI time series (extract_voxel_timeseries is a user-supplied helper)
    roi_voxels <- which(parcellation == roi_id, arr.ind = TRUE)
    roi_data <- extract_voxel_timeseries(fmri_4d, roi_voxels)
    
    # Run KPT
    roi_results[[as.character(roi_id)]] <- kpt_algorithm(
      Y = roi_data,
      t_grid = 1:dim(fmri_4d)[4],
      n_max = 8,
      max_iter = 30
    )
  }
  
  return(roi_results)
}
```

14.4.8 Simulation-Based Validation

```r
validation_study <- function(n_sims = 100, param_grid) {
  results <- expand.grid(param_grid)
  results$error <- NA
  results$convergence_iter <- NA
  
  for (i in 1:nrow(results)) {
    errors <- numeric(n_sims)
    conv_iters <- numeric(n_sims)
    
    for (sim in 1:n_sims) {
      # Generate data with specified parameters
      sim_data <- simulate_fmri_data(
        N = results$N[i],
        K = results$K[i],
        sigma_noise = results$sigma[i]
      )
      
      # Run KPT
      kpt_res <- kpt_algorithm(
        Y = sim_data$Y,
        t_grid = sim_data$t_grid,
        n_max = results$n_max[i]
      )
      
      # Compute error
      errors[sim] <- compute_reconstruction_error(
        kpt_res, kime_surface_true
      )$mean_error
      
      conv_iters[sim] <- length(kpt_res$convergence_history)
    }
    
    results$error[i] <- mean(errors)
    results$convergence_iter[i] <- mean(conv_iters)
  }
  
  return(results)
}

# Run validation
param_grid <- list(
  N = c(20, 50, 100),
  K = c(100, 200, 400),
  sigma = c(1, 2, 5),
  n_max = c(5, 10, 15)
)

validation_results <- validation_study(n_sims = 50, param_grid)
```

14.4.9 Robustness Analysis

```r
robustness_analysis <- function(base_data, perturbation_levels) {
  results <- list()
  
  for (p_level in perturbation_levels) {
    # Add perturbations
    perturbed_Y <- base_data$Y + 
      rnorm(length(base_data$Y), 0, p_level * sd(base_data$Y))
    
    # Run KPT
    kpt_res <- kpt_algorithm(perturbed_Y, base_data$t_grid)
    
    # Store results
    results[[as.character(p_level)]] <- kpt_res
  }
  
  # Compare phase distributions (compare_phase_distributions is a user-supplied helper)
  compare_phase_distributions(results)
}
```

14.4.10 Parallel Implementation

```r
library(parallel)
library(foreach)
library(doParallel)

kpt_algorithm_parallel <- function(Y, t_grid, n_max = 10, 
                                   max_iter = 50, n_cores = NULL) {
  
  if (is.null(n_cores)) {
    n_cores <- detectCores() - 1
  }
  
  cl <- makeCluster(n_cores)
  registerDoParallel(cl)
  
  # Parallel phase sampling (sample_phase must be exported to the workers)
  sample_phases_parallel <- function(phi_current, N, K) {
    Theta_sampled <- foreach(k = 1:K, .combine = cbind,
                             .export = "sample_phase") %dopar% {
      theta_k <- numeric(N)
      for (j in 1:N) {
        theta_k[j] <- sample_phase(phi_current[k, ])
      }
      theta_k
    }
    return(Theta_sampled)
  }
  
  # Main algorithm loop with parallelization
  # ... (implement parallel versions of key steps)
  
  stopCluster(cl)
}
```

14.4.11 GPU Acceleration (using `gpuR`)

```r
library(gpuR)

kpt_algorithm_gpu <- function(Y, t_grid, n_max = 10) {
  # Transfer data to GPU
  Y_gpu <- gpuMatrix(Y, type = "float")
  
  # GPU-accelerated Fourier transforms
  compute_moments_gpu <- function(Y_gpu, Theta_gpu, n_max) {
    # Implement GPU kernel for moment computation
    # ... 
  }
  
  # GPU-accelerated matrix operations
  # ...
}
```

14.4.12 Integration with fMRItools

```r
library(fmri)
library(oro.nifti)

kpt_fmri_wrapper <- function(nifti_file, mask_file = NULL) {
  # Load fMRI data
  fmri_data <- readNIfTI(nifti_file)
  
  if (!is.null(mask_file)) {
    mask <- readNIfTI(mask_file)
  } else {
    mask <- fmri_data[,,,1] > quantile(fmri_data[,,,1], 0.1)
  }
  
  # Extract masked time series (extract_masked_timeseries is a user-supplied helper)
  time_series <- extract_masked_timeseries(fmri_data, mask)
  
  # Run KPT
  kpt_results <- kpt_algorithm(
    Y = time_series,
    t_grid = 1:dim(fmri_data)[4]
  )
  
  # Reconstruct 4D results (reconstruct_4d is a user-supplied helper)
  kime_surface_4d <- reconstruct_4d(kpt_results, mask, dim(fmri_data))
  
  # Save results
  writeNIfTI(kime_surface_4d, filename = "kime_surface_4d")
  
  return(kpt_results)
}
```

14.5 Potential Problems and Solutions

| Issue | Symptom | Solution |
|---|---|---|
| Undersmoothing | Noisy phase distributions | Increase bandwidth or regularization |
| Oversmoothing | Loss of multimodal structure | Decrease bandwidth, increase `n_max` |
| Slow convergence | >50 iterations needed | Better initialization, adaptive step sizes |
| Numerical instability | NaN or Inf values | Add regularization, check data scaling |

15 Applications

The TCIU Appendix includes an implementation of a simulated fMRI time-series and the corresponding Kime-Phase Tomography (KPT) fMRI time-series simulation, showing the mapping of repeated fMRI time-series measurements to 2D kime-surfaces. This simulation specifically tests the KPT algorithm on synthetic fMRI data with ON (stimulus) and OFF (rest) conditions.

16 Appendix A: Geometry of the kime‑surface & Fisher Information

To study the geometry of the kime‑surface, let’s start with the smooth kime-surface embedding \[X:(0,T]\times(-\pi,\pi]\;\longrightarrow\;\mathbb R^{3}, \qquad X(t,\theta)=\bigl(x(t,\theta),y(t,\theta),Z(t,\theta)\bigr) =(t\cos\theta,\;t\sin\theta,\;Z(t,\theta)),\] with \(Z\in C^{2}((0,T]\times(-\pi,\pi])\), and express the partial derivatives as \(Z_t=\partial_tZ,\;Z_\theta=\partial_\theta Z,\) etc. (The embedding is written \(X\), rather than \(F\), to avoid a clash with the first-fundamental-form coefficient \(F\) below.)

The first fundamental form is

\[\begin{aligned} E&=\langle X_t,X_t\rangle =1+Z_t^{2},\\[2pt] F&=\langle X_t,X_\theta\rangle =Z_tZ_\theta,\\[2pt] G&=\langle X_\theta,X_\theta\rangle =t^{2}+Z_\theta^{2}. \end{aligned}\] and the Normal vector is \[W=X_t\times X_\theta =(\sin\theta\,Z_\theta-t\cos\theta\,Z_t,\; -\cos\theta\,Z_\theta-t\sin\theta\,Z_t,\; t),\qquad \|W\|^{2}=t^{2}\bigl(1+Z_t^{2}\bigr)+Z_\theta^{2}.\]

Let’s define the squared normal magnitude,

\[\Delta :=\boxed{\;\|W\|^{2}=EG-F^{2} \;} \tag{A.1}\]

Then the second fundamental form is

\[\begin{aligned} e_{\text{num}} &= t\,Z_{tt},\\ f_{\text{num}} &= t\,Z_{t\theta}-Z_\theta,\\ g_{\text{num}} &= t\bigl(tZ_t+Z_{\theta\theta}\bigr). \end{aligned} \qquad e=\frac{e_{\text{num}}}{\|W\|},\; f=\frac{f_{\text{num}}}{\|W\|},\; g=\frac{g_{\text{num}}}{\|W\|}.\]

Using \(K=(eg-f^{2})/(EG-F^{2})\) and equation (A.1), the Gaussian curvature is

\[\boxed{\; \underbrace{K}_{Gaussian}=\frac{e_{\text{num}}g_{\text{num}}-f_{\text{num}}^{2}} {\Delta^{2}} \;=\; \frac{\displaystyle t^{2}Z_{tt}\bigl(tZ_t+Z_{\theta\theta}\bigr)\;-\; \bigl(tZ_{t\theta}-Z_\theta\bigr)^{2}} {\bigl[t^{2}\bigl(1+Z_t^{2}\bigr)+Z_\theta^{2}\bigr]^{2}} } \tag{A.2}\]

Note that the numerator of (A.2) carries an overall positive sign.

Also, the mean curvature

\[\boxed{\; \underbrace{H}_{mean}=\frac{1}{2}\frac{Eg+Ge-2Ff}{EG-F^{2}} =\frac{(1+Z_t^{2})\,g_{\text{num}}+(t^{2}+Z_\theta^{2})\,e_{\text{num}} -2Z_tZ_\theta\,f_{\text{num}}} {2\,\Delta^{3/2}} } \tag{A.3}\]

may be expanded as

\[H=\frac{(1+Z_t^{2})\,t\bigl(tZ_t+Z_{\theta\theta}\bigr) +(t^{2}+Z_\theta^{2})\,tZ_{tt} -2Z_tZ_\theta\bigl(tZ_{t\theta}-Z_\theta\bigr)} {2\,\bigl[t^{2}(1+Z_t^{2})+Z_\theta^{2}\bigr]^{3/2}}.\]

Finally, the principal curvatures are

\[\underbrace{k_{1,2}}_{principal}=H\;\pm\;\sqrt{H^{2}-K}.\]

With respect to the kime coordinate basis \((X_t,X_\theta)\), the shape operator is

\[\boxed{\; S=I^{-1}II =\frac{1}{\Delta^{3/2}} \begin{pmatrix} G & -F\\ -F & E \end{pmatrix} \begin{pmatrix} e_{\text{num}} & f_{\text{num}}\\ f_{\text{num}} & g_{\text{num}} \end{pmatrix} }\tag{A.4}\] where the scalar prefactors \(1/\Delta\) (from \(I^{-1}\)) and \(1/\|W\|\) (from \(II\)) combine as \(\Delta^{-1}\Delta^{-1/2}=\Delta^{-3/2}\).

Together, equations (A.2)–(A.4) give a complete account of the intrinsic and extrinsic geometry of kime‑surfaces. All classical surface invariants (Christoffel symbols, geodesics, the Laplace–Beltrami operator, etc.) can now be written explicitly from the scalar coefficients \(E,F,G\) and their \(t,\theta\) derivatives.
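As a concrete numerical check, the sketch below evaluates (A.2)–(A.3) for the Section 14 test surface \(Z(t,\theta)=100+10\cos\theta+5\sin(0.1\,t)\), whose partial derivatives are available in closed form; the function name is illustrative.

```r
# Curvatures of the Section 14 test surface via (A.1)-(A.3)
kime_curvature_example <- function(t, theta) {
  Zt    <-  0.5  * cos(0.1 * t)    # Z_t
  Ztt   <- -0.05 * sin(0.1 * t)    # Z_tt
  Zth   <- -10 * sin(theta)        # Z_theta
  Zthth <- -10 * cos(theta)        # Z_theta_theta
  Ztth  <- 0                       # Z_t_theta
  Delta <- t^2 * (1 + Zt^2) + Zth^2              # ||W||^2 = EG - F^2, eq. (A.1)
  e_num <- t * Ztt
  f_num <- t * Ztth - Zth
  g_num <- t * (t * Zt + Zthth)
  K <- (e_num * g_num - f_num^2) / Delta^2       # Gaussian curvature, eq. (A.2)
  H <- ((1 + Zt^2) * g_num + (t^2 + Zth^2) * e_num -
        2 * Zt * Zth * f_num) / (2 * Delta^(3/2))  # mean curvature, eq. (A.3)
  list(K = K, H = H,
       k1 = H + sqrt(pmax(H^2 - K, 0)),          # principal curvatures
       k2 = H - sqrt(pmax(H^2 - K, 0)))
}
# e.g., curvature at t = 50, theta = 0
kime_curvature_example(50, 0)
```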

Within the KPT framework, there is a connection between the geometry of the kime-surface and the information content of the phase distribution. The relationship is formalized through the concepts of information geometry, where the Fisher information itself defines a metric on a manifold of probability distributions. The geometric quantities of the kime-surface relate to the Fisher information of the phase distribution \(\Phi_t\).

First, consider the family of possible phase distributions \(\varphi_t(\theta)\) at a fixed time \(t\). If this family of distributions is parameterized by a set of parameters \(\boldsymbol{\xi} = (\xi_1, \xi_2, \dots, \xi_d)\), then these parameters define the coordinates of a statistical manifold. Each point on this manifold corresponds to a specific probability distribution.

The natural way to measure distances and define geometry on this manifold is through the Fisher information metric. For a given parameterization \(\boldsymbol{\xi}\), the components of the Fisher information metric tensor, \(I(\boldsymbol{\xi})\), are given by

\[I_{ij}(\boldsymbol{\xi}) = \mathbb{E}\left[\left(\frac{\partial \log \varphi_t(\theta; \boldsymbol{\xi})}{\partial \xi_i}\right)\left(\frac{\partial \log \varphi_t(\theta; \boldsymbol{\xi})}{\partial \xi_j}\right)\right].\]

This metric measures the distinguishability of distributions. A large distance between two points on the manifold implies that the corresponding probability distributions are easy to distinguish based on sampled data. The Fisher information metric is, in fact, the Hessian of the Kullback-Leibler divergence, which reinforces its role as a measure of statistical difference.
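For instance, for a von Mises phase family \(\varphi_t(\theta;\mu,\kappa)\), the metric components can be evaluated numerically from the scores; the sketch below uses a grid-based expectation, and the function name is illustrative.

```r
# Numerical Fisher information matrix for a von Mises phase family
fisher_info_vonmises <- function(mu, kappa, M = 2000) {
  theta <- seq(-pi, pi, length.out = M + 1)[-(M + 1)]
  dth <- 2 * pi / M
  dens <- exp(kappa * cos(theta - mu)) / (2 * pi * besselI(kappa, 0))
  # scores: d log(phi)/d mu and d log(phi)/d kappa
  s_mu <- kappa * sin(theta - mu)
  s_ka <- cos(theta - mu) - besselI(kappa, 1) / besselI(kappa, 0)
  I11 <- sum(dens * s_mu^2) * dth
  I22 <- sum(dens * s_ka^2) * dth
  I12 <- sum(dens * s_mu * s_ka) * dth
  matrix(c(I11, I12, I12, I22), 2, 2,
         dimnames = list(c("mu", "kappa"), c("mu", "kappa")))
}
```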

The link between the reconstructed kime-surface and the Fisher information of the underlying phase distribution arises from how the parameters of \(\varphi_t\) influence the shape of the surface.

The first fundamental form, with (upper-case) coefficients \(E\), \(F\), and \(G\), defines the intrinsic geometry of the kime-surface itself. It measures distances and angles on the surface as embedded in \(\mathbb{R}^3\). The coefficients are functions of the partial derivatives of the surface’s position vector, which in turn depend on the harmonic coefficients \(f_n(t)\) and the phase \(\theta\).

The second fundamental form, with (lower-case) coefficients \(e\), \(f\), and \(g\), describes the extrinsic curvature of the surface, i.e., how the kime-surface bends in the ambient \(\mathbb{R}^3\). These coefficients depend on the second partial derivatives of the surface.

The crucial connection is that changes in the parameters \(\boldsymbol{\xi}\) of the phase distribution \(\varphi_t(\theta; \boldsymbol{\xi})\) cause changes in the harmonic coefficients \(f_n(t)\), which in turn alter the geometry of the kime-surface. This means that the metric and curvature of the kime-surface are functions of the parameters of the phase distribution. Hence, the geometry of the kime-surface provides a proxy for the geometry of the underlying statistical manifold of phase distributions.

The Gaussian curvature \(K = (eg - f^2)/(EG - F^2)\) is an intrinsic property of the surface. In information geometry, the scalar curvature of a statistical manifold (derived from the Fisher information metric) can be used to classify phase transitions and characterize the complexity of the statistical model. In KPT, high Gaussian curvature of the kime-surface at a point \((t, \theta)\) signals rapid changes in the underlying phase distribution with respect to its parameters; regions of high Gaussian curvature on the kime-surface therefore correspond to regions of high information content in the phase distribution.

The mean curvature \(H = \frac{1}{2}\frac{Eg + Ge - 2Ff}{EG-F^{2}}\) is an extrinsic property related to how the surface is embedded in space; it reflects how the surface bends on average. In the context of KPT, changes in the mean curvature can be linked to changes in the moments of the phase distribution. This provides another way to probe the information content of the phase distribution, as the moments are directly related to the shape of the distribution.

The geometric properties of the kime-surface, particularly its curvature, provide a visual and quantitative representation of the Fisher information landscape of the underlying, unobservable phase distribution. By analyzing the geometry of the reconstructed surface, one can infer where the phase distribution is most sensitive to changes in its parameters, and thus where the most information about the underlying process is encoded. This turns the KPT reconstruction into a tool for visual data analysis, allowing researchers to effectively “see” the information content of their longitudinal data.
