Consider complex-time (kime), \(\kappa = t \cdot e^{i\theta}\), as a polar-coordinate parameterization of the 2D domain, \(\mathbb{R}^2\), of a 2D manifold. Both domain parameters, the time (\(t\)) and the phase (\(\theta\)), are random variables, not constants or controllable parameters. For instance, \(t \sim Uniform[0, \infty)\) is a time sampling distribution, whereas \(\theta \sim \Phi(t)\) is some circular (periodic) phase sampling distribution, such as Laplace, supported on the unit circle \(S^1\), i.e., \(\theta \in [-\pi, \pi)\). In this TCIU Appendix we explore the idea of kime-phase tomography (KPT): recovering the unobservable kime-phase and representing longitudinal processes, measured as repeated time-series, as 2D kime-surfaces parameterized over \(\kappa\in\mathbb{C}\).
As a motivation, consider the specific example of having repeated time-series measurements of a controlled longitudinal process, \(\{s_1(t), s_2(t), \cdots, s_N(t)\}\). For a fixed repeated-sample index, \(j\), the observed intensities, \(s_j(t)\), reflect the kime-surface amplitudes at radial location \(t\). If \(t\) follows a uniform sampling distribution and \(\theta \sim \Phi(t)\) is a circular (periodic) phase distribution supported on the unit circle \(S^1\), kime-phase tomography maps the longitudinal process, tracked as the entire collection of repeated time-series, to a manifold, \(Map: Process \{s_j(t)\} \to \mathcal{M}\).
First we will explore the fundamentals of mathematically representing a 2D manifold parameterized by two distributions — time \(t\) (radial) and phase \(\theta\) (angular). Let’s consider the repeated time-series measurements \(\{s_j(t)\}_{j=1}^N\). The goal is to map the underlying process to a surface embedded in \(\mathbb{R}^3\). The complex-time (kime) parameterization \(\kappa = t e^{i\theta}\) defines the kime domain, where \(t \sim \text{Uniform}[0, \infty)\) and \(\theta \sim \Phi(t)\) (a circular Laplace distribution on \(S^1 \equiv [-\pi, \pi)\)). The observed intensity \(s_j(t)\) will impact the manifold amplitude at \(\kappa\).
Kime Domain: \(\mathcal{K} = \mathbb{C} \cong \mathbb{R}^2\), with kime points \(\kappa = t e^{i\theta}\).
Time Distribution: \(t \sim \mathcal{U}[0, \infty)\), with density \(p_t(t') = \lim_{T \to \infty} \frac{\mathbf{1}_{[0,T]}(t')}{T}\).
Phase Distribution: \(\theta \sim \Phi(t)\), a circular Laplace distribution conditioned on \(t\) \[p_{\theta|t}(\theta' \mid t) = \frac{\beta(t)}{2 \sinh \beta(t)} e^{\beta(t) \cos(\theta' - \mu(t))}, \quad \theta' \in [-\pi, \pi),\] where \(\mu(t) \in [-\pi, \pi)\) is the (Laplace distribution) location parameter and \(\beta(t) > 0\) is the scale parameter. This is intended as a normalized periodic density (wrapped Laplace); the explicit normalizer is revisited in the advanced section.
The observed data contains the entire time-series collection with all \(N\) realizations of the longitudinal process, observed at discrete times. For each \(j \in \{1, \dots, N\}\), the time points are \(\mathbf{t}_j = \{t_{j,k}\}_{k=1}^{M_j} \subset [0, \infty)\), and the corresponding process intensities are \(s_j(t_{j,k}) \in \mathbb{R}\). Thus, the complete dataset is \(\mathcal{D} = \{(j, t_{j,k}, s_j(t_{j,k})) \mid 1 \leq j \leq N, 1 \leq k \leq M_j\}.\)
The time-series to kime-surfaces mapping \(\mathcal{M}: \mathcal{D} \to \mathbb{R}^3\) may be defined in two steps. First, define the kime sampling; for each \((j, t_{j,k}) \in \mathcal{D}\), sample \(\theta_{j,k} \sim \Phi(t_{j,k}).\) Second, define the embedding for each point \[\mathbf{p}_{j,k} = \left( t_{j,k} \cos \theta_{j,k},\ t_{j,k} \sin \theta_{j,k},\ s_j(t_{j,k}) \right) \in \mathcal{M}\subset \mathbb{R}^3.\]
The discrete kime-surface is the set \[\mathcal{M}_{\text{discrete}} = \left\{ \mathbf{p}_{j,k} \mid (j, t_{j,k}) \in \mathcal{D} \right\}.\]
Whereas the more general continuous kime-surface requires kernel regression to estimate the intensity function \(f: \mathbb{R}^2 \to \mathbb{R}\) \[f(x, y) = \frac{\sum_{(j,k) \in \mathcal{D}} s_j(t_{j,k}) \cdot K\left( \frac{\| (x,y) - (x_{j,k}, y_{j,k}) \|}{h} \right)}{\sum_{(j,k) \in \mathcal{D}} K\left( \frac{\| (x,y) - (x_{j,k}, y_{j,k}) \|}{h} \right)},\] where \((x_{j,k}, y_{j,k}) = (t_{j,k} \cos \theta_{j,k}, t_{j,k} \sin \theta_{j,k})\), \(K(u) = e^{-u^2}\) (Gaussian kernel), \(h > 0\) is the bandwidth. The continuous kime-surface may be displayed as \[\mathcal{M}_{\text{cont}} = \left\{ (x, y, f(x,y)) \mid (x,y) \in \mathbb{R}^2 \right\}.\]
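The two-step construction above (phase sampling, then Nadaraya–Watson smoothing with the Gaussian kernel \(K(u)=e^{-u^2}\)) can be sketched numerically. The signal \(s_j(t)=\sin t + \text{noise}\), the wrapped-Laplace phase scale, and all other parameter values below are illustrative assumptions, not part of the formal construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 50                                         # repeats, time points per repeat
t = np.tile(np.linspace(0.1, 10, M), (N, 1))          # design times t_{j,k}
s = np.sin(t) + 0.1 * rng.standard_normal((N, M))     # illustrative intensities s_j(t)

# Step 1: kime sampling -- wrapped-Laplace phases theta_{j,k} ~ Phi(t_{j,k})
theta = np.mod(rng.laplace(0.0, 0.3, (N, M)) + np.pi, 2 * np.pi) - np.pi

# Step 2: embedding p_{j,k} = (t cos(theta), t sin(theta), s_j(t))
x, y = t * np.cos(theta), t * np.sin(theta)

def f_hat(xq, yq, h=0.5):
    """Nadaraya-Watson estimate of the kime-surface height, Gaussian kernel K(u)=exp(-u^2)."""
    d2 = (xq - x.ravel())**2 + (yq - y.ravel())**2
    w = np.exp(-d2 / h**2)
    return np.sum(w * s.ravel()) / np.sum(w)

# evaluate the continuous kime-surface on a small Cartesian grid
grid = [(xi, yi) for xi in np.linspace(-5, 5, 5) for yi in np.linspace(-5, 5, 5)]
surface = [f_hat(xi, yi) for xi, yi in grid]
```

In practice \(h\) would be chosen by cross-validation, and a polar grid is usually preferable for rendering (see the remarks below).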
First we will explore the parameterization of \(\mathcal{M}_{\text{cont}}\). The surface \(\mathcal{M}_{\text{cont}}\) is parameterized by the radial and angular distributions. Let \(\Omega = [0, \infty) \times [-\pi, \pi)\) with the product measure \(d\mu = p_t(t) p_{\theta|t}(\theta \mid t) dt d\theta\). The expected intensity at \(\kappa = t e^{i\theta}\) is \(z(t, \theta) = \mathbb{E}_j[s_j(t)],\) assuming \(s_j(t)\) are i.i.d. realizations of a stochastic process \(s(t)\). Then, the surface is \[\mathcal{M}_{\text{cont}} = \left\{ (t \cos\theta, t \sin\theta, z(t, \theta)) \mid (t, \theta) \in \Omega \right\}.\]
A latent phase path may be defined in different ways; here is one example. For each replicate \(j\), introduce a latent random trajectory \[\Theta_j:(0,T]\longrightarrow S^{1},\qquad \Theta_j(t)\underset{\text{over }t}{\overset{\text{i.i.d.}}{\sim}}\Phi_t.\] Then, the conditional process is an intrinsic intensity surface represented as a random field \[S_j(t,\theta)=\bigl[\text{mechanistic model}\bigr]+\varepsilon_j(t,\theta),\] where the \(\varepsilon_j\)’s are small-scale fluctuations.
Each of the observed time series, recorded time-courses, represents only slices along the latent phase: \[s_j(t):=S_j\!\bigl(t,\Theta_j(t)\bigr),\qquad t=t_{j,k}\ \text{in practice}.\]
In this framework, \[\underbrace{s_j(t)}_{\text{measured}} \;=\; \underbrace{S_j(t,\theta)}_{\text{latent surface}} \Big|_{\theta=\Theta_j(t)}.\] Hence \(s_j(t,\theta)\) and \(s_j(t)\) are not two different process definitions. Rather, \(s_j(t)\) is the push‑forward of \(s_j(t,\theta)\) by the random map \(\theta=\Theta_j(t)\).
The table below decomposes the sources of stochasticity in this representation.
Symbol | Randomness from |
---|---|
\(\Theta_j(t)\) | phase variability (circular law \(\Phi_t\)) |
\(S_j(t,\theta)\) | surface variability (between replicates) |
\(\varepsilon_{j,k}\) | measurement noise (optional) |
Since \(\Theta_j(t)\) and \(S_j(t,\theta)\) are independent by construction, the moment relationship is \[\mathbb E\bigl[s_j(t)\bigr] =\int_{-\pi}^{\pi}\mathbb E\bigl[S_j(t,\theta)\bigr]\, \varphi_t(\theta)\,\frac{d\theta}{2\pi} =:Z(t),\] the quantity embedded as the kime-surface height function. If \(\Theta_j(t)\) has temporal dependence, e.g., a wrapped Ornstein-Uhlenbeck (OU) process, where the state variable is defined on a compact space, like the unit circle \(S^1\) or an interval with periodic boundary, the same formula holds with \(\varphi_t\) replaced by the marginal law at time \(t\).
Let \[\Omega:=(0,\infty)\times S^{1},\qquad d\mu(t,\theta)=p_t(t)\,\varphi_t(\theta)\, dt\,d\theta/2\pi\] and define the conditional mean surface \[z(t,\theta):=\mathbb E\bigl[S_1(t,\theta)\bigr].\] The kime‑surface is the graph \[\mathcal M_{\rm cont}=\Bigl\{\bigl(t\cos\theta,\;t\sin\theta,\;z(t,\theta)\bigr)\;:\; (t,\theta)\in\Omega\Bigr\}.\]
The actually observed time‑series \(s_j(t)\) are 1D stochastic slices of this surface along the random phase path \(\theta=\Theta_j(t)\). This interpretation is consistent, as long as the hierarchy \(S_j(t,\theta)\;\to\;s_j(t)=S_j\bigl(t,\Theta_j(t)\bigr)\) is explicit.
All inference in KPT is aimed at inverting this hierarchy, that is, using the collection \(\{s_j(t_k)\}\) to recover both \(z(t,\theta)\) (kime-surface geometry) and \(\varphi_t(\theta)\) (phase distribution law). In a statistical-physics sense, \(z(t,\theta)\) is the conditional mean, whereas \(Z(t)=\mathbb E[s_j(t)]\) is the marginal mean after integrating over the latent kime-phase.
Theorem 1 (Consistency of Kernel Estimator): Let \(f(x,y)\) be the kernel estimator of \(z(t, \theta)\) at \((x,y) = (t \cos\theta, t \sin\theta)\). Assume that:
1. The process \(s_j(t)\) is ergodic with \(\mathbb{E}[|s_j(t)|^2] < \infty\) for all \(t\);
2. The time points are i.i.d. from \(\mathcal{U}[0,T]\), and \(T \to \infty\);
3. The bandwidth \(h = h(M)\) satisfies \(\lim_{M \to \infty} h = 0\) and \(\lim_{M \to \infty} M h^2 = \infty\), where \(M = |\mathcal{D}|\).
Then, for all \((t, \theta) \in \Omega\), \(f(x, y) \xrightarrow{a.s.} z(t, \theta) \quad \text{as} \quad M \to \infty.\)
Proof: Define the regression function \(m(\mathbf{u}) = \mathbb{E}[s \mid \mathbf{u}]\) for \(\mathbf{u} = (x,y)\). By the Strong Law of Large Numbers \[\frac{\sum_{i=1}^M s_i K\left( \frac{\|\mathbf{u} - \mathbf{u}_i\|}{h} \right)}{\sum_{i=1}^M K\left( \frac{\|\mathbf{u} - \mathbf{u}_i\|}{h} \right)} \xrightarrow{a.s.} \frac{\mathbb{E}[s \cdot K(\|\mathbf{u} - \mathbf{u}'\| / h)]}{\mathbb{E}[K(\|\mathbf{u} - \mathbf{u}'\| / h)]}.\]
As \(h \to 0\), the right-hand side converges to \(\mathbb{E}[s \mid \mathbf{u}] = z(t, \theta)\) (by properties of Gaussian kernels). The conditions on \(h\) ensure consistency.
Example (Circular Laplace Distribution): The wrapped Laplace density is \[p_{\theta|t}(\theta' \mid t) = \sum_{k=-\infty}^{\infty} \frac{1}{2b(t)} e^{-\frac{|\theta' - \mu(t) + 2\pi k|}{b(t)}}.\] A simple sampling algorithm draws \(\eta \sim \text{Laplace}(\mu(t), b(t))\) (linear Laplace) and computes \(\theta_{j,k} = \eta \mod 2\pi\) (wrapped to \([-\pi, \pi)\)). The discrete kime-surface (\(\mathcal{M}_{\text{discrete}}\)) can be rendered as a 3D scatter plot of \(\mathbf{p}_{j,k}\), where points spread radially; for fixed \(t\), they lie on a circle of radius \(t\) with heights \(s_j(t)\). The continuous kime-surface (\(\mathcal{M}_{\text{cont}}\)) can be displayed by sampling \(\{\theta_{j,k}\}\) for all \((j,k) \in \mathcal{D}\), computing \((x_{j,k}, y_{j,k})\), and evaluating \(f(x,y)\) on a grid (e.g., Cartesian or polar). It renders as a smooth surface, radially symmetric if \(z(t,\theta)\) is independent of \(\theta\), e.g., if \(s_j(t)\) has no phase dependence.
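The draw-and-wrap recipe above is easy to verify numerically: for a symmetric wrapped law, the circular mean direction of the wrapped samples recovers the location \(\mu\). The parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, b = 0.7, 0.4                       # illustrative location / scale at a fixed t

# draw linear Laplace variates eta ~ Laplace(mu, b) and wrap onto [-pi, pi)
eta = rng.laplace(mu, b, size=200_000)
theta = np.mod(eta + np.pi, 2 * np.pi) - np.pi

# circular (first trigonometric) moment; its argument estimates mu because
# E[e^{i theta}] = e^{i mu} / (1 + b^2) for the wrapped Laplace
mean_dir = np.angle(np.mean(np.exp(1j * theta)))
```

The modulus of the same moment, \(1/(1+b^2)\), could analogously be used to estimate the scale \(b\).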
Observe the intrinsic phase dependences. When \(s_j(t)\) depends on \(\theta\), e.g., for phase-sensitive processes, \(z(t,\theta)\) varies angularly, breaking radial symmetry. A polar grid is more efficient for computation and visualization, and the bandwidth hyperparameter \(h\) can be optimized via cross-validation. Moreover, the Laplace phase law can be replaced with any circular distribution, such as von Mises.
Kime-phase tomography is inspired by quantum mechanics (QM) and requires translating concepts from quantum tomography (state reconstruction via observable measurements, operator commutators, and distribution theory) into the context of kime manifolds. The goal is to reconstruct the unobservable phase distribution \(\Phi(t)\) from repeated time-series measurements \(\{s_j(t)\}\).
On the Hilbert space \(\mathcal K \;=\;L^{2}\!\bigl((0,\infty)\times S^{1},dt\,d\theta/2\pi\bigr)\), we can define operators acting on state functions as follows
symbol | action on \(\psi(t,\theta)\) | domain (core) |
---|---|---|
\(\widehat T\) | \((\widehat T\psi)(t,\theta)=t\,\psi(t,\theta)\) | \(C_{c}^{\infty}\) |
\(\widehat\Theta\) | \((\widehat\Theta\psi)(t,\theta)=\theta\,\psi(t,\theta)\) | \(C_{c}^{\infty}\) |
\(\widehat P_{\phi}\) | \((\widehat P_{\phi}\psi)(t,\theta)=\bigl(e^{i\phi\widehat\Theta}\psi\bigr)(t,\theta)\) | everywhere (unitary) |
\(\widehat P_{\phi}\) (alternative) | \((\widehat P_{\phi}\psi)(t,\theta)=e^{i\phi}\psi(t,\theta)\) | everywhere (unitary) |
Note that since both \(\widehat T\) and \(\widehat\Theta\) are multiplication operators, they commute, i.e., their commutator is trivial, \([\widehat T,\widehat\Theta]=0\). Non‑trivial commutators arise only if at least one operator involves a derivative, i.e., a generator of a one‑parameter unitary group. Two classical choices to obtain non-commuting operators are listed below.
Canonical pair | Commutator | Remarks |
---|---|---|
\(\bigl(\widehat\Theta,\ \widehat L_\theta:=-i\hbar_\kappa\,\partial_\theta\bigr)\) | \([\widehat\Theta,\widehat L_\theta]=i\hbar_\kappa\,\mathbf 1\) | angular “position/momentum” |
\(\bigl(\widehat T,\ \widehat\Omega:=-i\hbar_\kappa\,\partial_t\bigr)\) | \([\widehat T,\widehat\Omega]=i\hbar_\kappa\,\mathbf 1\) | radial analogue |
In functional analysis and quantum mechanics, the Canonical Commutation Relation (CCR) operators carry the algebraic structure that describes the commutation relations between position and momentum. The CCR algebra over a symplectic vector space is a \(C^*\)-algebra generated by elements satisfying the Weyl form of the canonical commutation relations; these relations imply that the generating elements are unitary with specific commutation properties.
Schwartz operators represent a non-commutative analog of Schwartz functions; the space of Schwartz operators is a Fréchet space equipped with a family of seminorms. The Schwartz kernel theorem states that continuous linear operators between Schwartz spaces (or between spaces of test functions and distributions) can be represented by a kernel that is a distribution on the product space, providing a framework for describing continuous linear operators using distributions.
For example, introducing kime cross‑non‑commutativity between \(t\) and \(\theta\) may require a Moyal‑type plane, \[[\widehat T,\widehat\Theta]=i\ell^{2}\mathbf 1 \qquad(\ell>0\;\text{a deformation length scale}),\] in which case \(\widehat T,\widehat\Theta\) cannot both be multiplication operators, i.e., at least one operator must be represented by a derivative plus a linear term so that the CCR holds.
One consistent realization of kime cross‑non‑commutativity involves the following pair of operators \[\widehat T = t,\qquad \widehat\Theta = \theta - i\ell^{2}\partial_t,\] defined on the Schwartz core; since \([t,\partial_t]=-\mathbf 1\), this choice yields \([\widehat T,\widehat\Theta]=i\ell^{2}\mathbf 1\). All other commutators, e.g., \([\widehat T,\widehat P_\phi]\), are then automatically determined.
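These commutators can be checked symbolically by applying the operators to a generic test function. Note that with the convention \([A,B]=AB-BA\) and \([t,\partial_t]=-\mathbf 1\), producing \(+i\ell^{2}\mathbf 1\) requires the sign \(\widehat\Theta=\theta-i\ell^{2}\partial_t\); with \(+i\ell^{2}\partial_t\) the commutator flips sign. A sympy sketch (the deformation symbols are illustrative):

```python
import sympy as sp

t, th = sp.symbols('t theta')
ell, hb = sp.symbols('ell hbar_kappa', positive=True)
psi = sp.Function('psi')(t, th)

# operators as actions on a test function psi(t, theta)
T     = lambda f: t * f                                    # radial multiplication
Theta = lambda f: th * f - sp.I * ell**2 * sp.diff(f, t)   # theta - i l^2 d/dt
L     = lambda f: -sp.I * hb * sp.diff(f, th)              # angular momentum -i hbar d/dtheta

# commutators [A, B] psi = A(B psi) - B(A psi)
comm_TTheta = sp.expand(T(Theta(psi)) - Theta(T(psi)))     # expect  i l^2 psi
comm_ThetaL = sp.expand(Theta(L(psi)) - L(Theta(psi)))     # expect  i hbar psi
```

The second commutator confirms that the deformation term \(-i\ell^2\partial_t\) does not disturb the angular canonical pair \((\widehat\Theta,\widehat L_\theta)\).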
In practice, we select a desired canonical commutation relations and then pick an operator representation that satisfies these relations.
\[\boxed{ \begin{align} {\text{For instance, }} [\widehat T,\widehat P_\phi]=i\hbar_\kappa\widehat P_\phi\partial_\theta {\text{ is not consistent with }} \widehat P_\phi=e^{i\phi}\mathbf 1. \\ {\text{Ensuring that the commutator }} [\widehat T,\widehat P_\phi]=i\hbar_\kappa\widehat P_\phi\partial_\theta {\text{ holds requires defining }} \\ \widehat P_\phi:=e^{i\phi\widehat\Theta} {\text{ and letting }} \widehat\Theta = \theta - i\ell^{2}\partial_t {\text{ carry a derivative component.}} \end{align} }\]
Definition 1 (Kime-Space): The kime domain \(\mathcal{K} \cong \mathbb{R}^2\) is parameterized by \(\kappa = te^{i\theta}\), with \(t \sim \mathcal{U}[0, \infty)\), radial longitudinal coordinate, and \(\theta \sim \Phi(t)\), angular coordinate, \(\theta \in S^1 \equiv [-\pi, \pi)\).
Definition 2: (Operator Algebra): Define non-commutative operators on \(\mathcal{K}\):
Radial Operator (time multiplication): \(\hat{T} \psi(t, \theta) = t \cdot \psi(t, \theta)\)
Angular Operator (phase multiplication): \(\hat{\Theta} \psi(t, \theta) = \theta \cdot \psi(t, \theta)\)
Phase Shift Operator: \(\hat{P}_\phi \psi(t, \theta) = e^{i\phi} \psi(t, \theta)\)
The commutator \([\hat{T}, \hat{\Theta}]\) encodes uncertainty \[[\hat{T}, \hat{\Theta}] \psi = (t\theta - \theta t)\psi = 0, \quad \text{(commutative baseline)}.\] To introduce quantum-like non-commutativity, define a kime-deformation \[[\hat{T}, \hat{P}_\phi] = i\hbar_\kappa \hat{P}_\phi \frac{\partial}{\partial \theta}, \quad \hbar_\kappa \text{ (kime "Planck constant")}.\]
This commutator may need to be replaced to reflect the specific canonical operator pair used, e.g., \([\widehat\Theta,\,-i\hbar_\kappa\partial_\theta]=i\hbar_\kappa\).
Theorem 2: (Kime-Uncertainty Principle): For any state \(\psi(t, \theta)\) with \(\|\psi\|=1\), \[\Delta t \cdot \Delta \theta \geq \frac{\hbar_\kappa}{2} \left| \left\langle \frac{\partial \psi}{\partial \theta} \right\rangle \right| .\] Proof: Apply Cauchy-Schwarz to \((\Delta T)^2 (\Delta \Theta)^2 \geq \frac{1}{4} |\langle [\hat{T}, \hat{\Theta}] \rangle|^2\) with the deformed commutator.
Definition 5 (Kime-Observables): An observable is a self-adjoint operator \(\hat{O}\) on \(\mathcal{H}_\kappa = L^2(\mathbb{R}^+ \times S^1, dt\, d\theta)\). The expectation for a state \(\rho\), density operator, is \(\langle \hat{O} \rangle_\rho = \text{tr}(\rho \hat{O}).\)
Axiom 1: (Measurement Data): The time-series \(s_j(t)\) corresponds to measuring the “intensity observable” \(\hat{S}\) \(s_j(t) = \langle \hat{S} \rangle_{\rho_j(t)} + \epsilon_j(t), \quad \epsilon_j \sim \text{noise},\) where \(\rho_j(t)\) is the state at \((t, \theta_j(t))\).
Definition 6: (Kime-Test Functions): Let \(\mathcal{D}(\mathcal{K})\) be the space of smooth, compactly supported test functions \(\varphi(t, \theta)\). The distribution action of \(\Phi(t)\) is \(\langle \Phi, \varphi \rangle_t = \int_{S^1} \varphi(t, \theta)\, d\Phi(t)(\theta).\)
Definition 7: (Kime-Transform): The Kime-Fourier Transform (KFT) for \(\Phi(t)\) is \(\mathcal{F}_\kappa[\Phi](n, t) = \int_{S^1} e^{-in\theta}\, d\Phi(t)(\theta), \quad n \in \mathbb{Z},\) recovering the Fourier coefficients of \(\Phi(t)\) at each \(t\).
The following result illustrates the tomographic kime-phase reconstruction theorem.
Let \(t>0\) be fixed and assume that the kime-surface height observable admits the spectral representation
\[\widehat{S}(t)=\sum_{n\in\mathbb Z} f_n(t)\,e^{in\widehat{\Theta}}, \qquad f_{-n}(t)=\overline{f_n(t)}, \tag{B.1}\] where \(\widehat{\Theta}\) is a (self‑adjoint) phase operator whose spectrum is the unit circle and whose functional calculus satisfies \(\exp\{in\widehat{\Theta}\}\,|{\theta}\rangle = e^{in\theta}|{\theta}\rangle\). For independent realizations \(\{s_j(t_k),\theta_{j,k}\}_{\overbrace{1\le j\le M}^{repeats},\;\overbrace{1\le k\le K}^{time}}\), define the empirical mixed moment
\[\widehat m_{n}(t):=\frac{1}{M}\sum_{j=1}^{M}s_j(t)\,e^{-in\theta_{j}}, \qquad n\in\mathbb Z. \tag{B.2}\]
Theorem 3 (Information‑completeness, empirical form): If \(\mathbb E|s_1(t)|^{2}<\infty\) and the kime-phase distribution derivative \(\varphi_t(\theta)=\dfrac{d\Phi_t}{d\theta}\in L^{2}(S^{1})\), then for each \(n\) \[\boxed{\; \widehat m_{n}(t)\;\xrightarrow[M\to\infty]{\text{a.s.}}\; f_n(t)\,\mathcal F[\varphi_t](n) }, \tag{B.3} \] where the Fourier transform is \(\mathcal F[\varphi_t](n)=\int_{-\pi}^{\pi}e^{-in\theta}\varphi_t(\theta)\,\dfrac{d\theta}{2\pi}\).
Proof: Let's start with the spectral decomposition (B.1), which implies \[s_j(t)=\langle\psi_j|\widehat{S}(t)|\psi_j\rangle =\sum_{n}f_n(t)\,e^{in\theta_j}, \tag{B.4} \] where \(\theta_j\) is the phase outcome (eigenvalue of \(\widehat{\Theta}\)) obtained in the \(j\)-th experiment.
Since the \(\theta_j\) are i.i.d. with density \(\varphi_t\), the expectation is \[\mathbb E\bigl[s_j(t)\,e^{-in\theta_j}\bigr] =\sum_{k}f_k(t)\,\underbrace{\mathbb E\bigl[e^{i(k-n)\theta_j}\bigr]}_{\mathcal F[\varphi_t](n-k)} =f_n(t)\,\mathcal F[\varphi_t](n). \tag{B.5}\]
The summands in (B.2) are i.i.d. with finite first moment, so by the strong law of large numbers \[\widehat m_{n}(t)\;\xrightarrow{\text{a.s.}}\; \mathbb E\bigl[s_1(t)\,e^{-in\theta_1}\bigr],\] and equation (B.5) gives the desired limit (B.3). \(\square\)
Note that, by making the operator domains explicit and using the explicit Fourier coefficient \(\mathcal F[\varphi_t](n)\), the formal convergence stated in equation (B.3) can be restated informally as \[\lim_{M\to\infty}\frac1M\sum_{j=1}^{M}s_j(t)\,e^{-in\theta_j} =f_n(t)\,\mathcal F[\Phi](n,t).\]
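The strong-law limit behind (B.2)–(B.3) can be checked by simulation at a fixed \(t\): the empirical mixed moment converges to the population moment \(\mathbb E[s_1(t)e^{-in\theta}]\), here computed independently by quadrature under the same phase density. The harmonic coefficients \(f_n\) and phase-law parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, beta = 0.5, 2.0                  # illustrative phase-law parameters at a fixed t
f = {0: 1.0, 1: 0.4, -1: 0.4}        # illustrative harmonic coefficients f_n(t)

# rejection sampler for the circular density proportional to exp(beta*cos(theta-mu))
def sample_phases(M):
    out = np.empty(0)
    while out.size < M:
        cand = rng.uniform(-np.pi, np.pi, 4 * M)
        keep = rng.uniform(0, 1, cand.size) < np.exp(beta * (np.cos(cand - mu) - 1))
        out = np.concatenate([out, cand[keep]])
    return out[:M]

theta = sample_phases(400_000)
s = sum(fn * np.exp(1j * n * theta) for n, fn in f.items()).real   # s_j(t) realizations

n = 1
m_hat = np.mean(s * np.exp(-1j * n * theta))         # empirical mixed moment, eq. (B.2)

# population moment E[s e^{-in theta}] by quadrature over the same density
g = np.linspace(-np.pi, np.pi, 20_001)
w = np.exp(beta * np.cos(g - mu)); w /= w.sum()      # discretized phase density
s_g = sum(fn * np.exp(1j * k * g) for k, fn in f.items()).real
m_pop = np.sum(s_g * np.exp(-1j * n * g) * w)
```

Repeating this for all \(n\) with \(|n|\le n_{\max}\) yields the moment sequence used in the reconstruction steps below.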
The Information‑completeness result (Theorem 3) directly connects to identifiability, as knowing the entire sequence \(\{\widehat m_n(t)\}\) gives the full set of Fourier coefficients of \(\varphi_t\), and hence completely describes the kime-phase distribution \(\Phi_t\) itself. Also, equation (B.3) reflects a finite‑\(t\) instance of the Kime-Uncertainty Principle (Theorem 2), as uniform‑in‑\(t\) consistency follows from standard triangular‑array sLLN arguments for a dense grid \(\{t_k\}\).
Corollary 3: If \(f_n(t) \neq 0\), \(\Phi(t)\) is reconstructed via \[\mathcal{F}_\kappa[\Phi](n,t) = \lim_{M \to \infty} \frac{1}{M f_n(t)} \sum_j s_j(t) e^{-in\theta_j}.\]
Step 1: Empirical Estimation of Moments. For discrete time bins \(\{t_k\}\), compute the empirical moments \(\hat{m}_n(t_k) = \frac{1}{M} \sum_{j=1}^M s_j(t_k) e^{-in\theta_j}\) and estimate \(f_n(t_k)\) via basis regression (e.g., \(f_n(t) = t \alpha_n\)).
Step 2: Inverse Kime-Transform. Solve for \(\Phi(t)\) at each \(t_k\): \(\Phi(t_k) = \arg \min_\Phi \sum_n \left| \hat{m}_n(t_k) - f_n(t_k) \mathcal{F}_\kappa[\Phi](n,t_k) \right|^2.\)
Theorem 4: (Consistency): When \(\theta_j\) are i.i.d. \(\sim \Phi(t)\), and \(f_n(t) \neq 0\), \(\hat{\Phi}(t) \xrightarrow{M \to \infty} \Phi(t)\) in distribution.
Proof: Follows directly from the Law of Large Numbers due to the continuity of \(\mathcal{F}_\kappa^{-1}\).
Example: (Laplace Phase Reconstruction): Let’s assume \(\Phi(t) = \text{Laplace}(\mu(t), b(t))\) on \(S^1\) with density \[p(\theta|t) = \frac{\beta(t)}{2\sinh \beta(t)} e^{\beta(t) \cos(\theta - \mu(t))}, \quad \beta(t) = 1/b(t).\]
Then the KFT coefficients are \[\mathcal{F}_\kappa[\Phi](n,t) = \frac{I_n(\beta(t))}{I_0(\beta(t))}\, e^{in\mu(t)},\] where \(I_n\) is the modified Bessel function of the first kind.
The KPT reconstruction involves estimation of \(\hat{m}_n(t_k)\) from \(\{s_j(t_k)\}\), solving for \(\mu(t_k), \beta(t_k)\) via \[|\hat{m}_1(t_k)| = \left| \frac{I_1(\beta(t_k))}{I_0(\beta(t_k))} \right|, \quad \arg(\hat{m}_1(t_k)) = \mu(t_k),\] and interpolation of \(\mu(t), \beta(t)\) across \(t\).
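The moment-matching step can be sketched numerically at one radial bin \(t_k\). This sketch samples phases from the von Mises form consistent with the Bessel-ratio coefficients, takes \(f_1\equiv 1\), and recovers \(\mu\) from the argument of the first circular moment \(\frac1M\sum_j e^{i\theta_j}\) (the transform's sign convention determines whether \(\arg\) returns \(\mu\) or \(-\mu\)); all parameter values are illustrative.

```python
import numpy as np
from scipy import stats, optimize, special

rng = np.random.default_rng(3)
mu_true, beta_true = 0.8, 2.5        # illustrative ground truth at a fixed t_k

theta = stats.vonmises.rvs(kappa=beta_true, loc=mu_true, size=300_000,
                           random_state=rng)

m1 = np.mean(np.exp(1j * theta))     # first circular moment

# location from the argument, concentration by inverting R = I_1(beta)/I_0(beta)
mu_hat = np.angle(m1)
R = np.abs(m1)
beta_hat = optimize.brentq(
    lambda b: special.iv(1, b) / special.iv(0, b) - R, 1e-6, 50.0)
```

Repeating this over the grid \(\{t_k\}\) and interpolating \((\hat\mu(t_k),\hat\beta(t_k))\) completes the reconstruction of \(\Phi(t)\).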
After the kime-phase distribution \(\Phi(t)\) is reconstructed, the kime-manifold embedding gives the kime-surface \(\mathcal{M}\) \[\mathcal{M} = \left\{ \left( t\cos\theta, t\sin\theta, \mathbb{E}_j[s_j(t)] \right) \mid t \geq 0, \theta \sim \Phi(t) \right\}.\]
The physical interpretation of kime tomography reflects a quantum analogy where \(\Phi(t)\) is the state reconstructed from measurements \(s_j(t)\) via operator expectations. Analogously to QM, the significance of the kime-operator commutator is that non-commutativity probes sensitivity to phase perturbations, enabling resolution in tomography. The kime-space information completeness is guaranteed since \(\{e^{in\hat{\Theta}}\}_{n \in \mathbb{Z}}\) spans \(\mathcal{H}_\kappa\), and hence, \(\Phi(t)\) is uniquely reconstructible (cf. Pauli’s theorem in QM).
In this KPT Overview section, we presented the main ideas of kime-phase tomography without some of the technical foundations that are included in the more detailed KPT formalism. The table below showcases some aspects of the basic KPT overview that are extended in the more advanced KPT mathematical foundations presented in the next section.
Aspects | Basic KPT (Overview) | Advanced KPT |
---|---|---|
Mathematical setting | Uses intuitive statements; e.g., ignoring \(\sigma\)‑algebra, probability space, and measurability of random fields | Explicates the probability triple \((\Omega,\mathcal F,\mathbb P)\) and defines \(t,\theta,s_j\) as measurable maps |
Distribution of \(t\) | Uses “Uniform \([0,\infty)\)” without normalizing constant | Corrects by using the truncated uniform on \([0,T]\) with \(T\to\infty\), introduces an improper prior and works under conditional densities |
Phase law \(\Phi(t)\) | Introduced as a “circular Laplace” without an explicit normalizer; the wrapped‑Laplace pdf written later is not normalized in \(\theta\) | Provides a wrapped‑Laplace or von Mises pdf with explicit normalizer and \(t\)-dependent parameters \((\mu(t),\beta(t))\) |
Map \(\{s_j(t)\}\to \mathcal M\) | Definition (6) of \(\mathbf p_{j,k}\) needs to analyze the continuity, injectivity, and topological type of \(\mathcal M\) | Offers a precise embedding \(F: (t,\theta)\mapsto\mathbb R^3\), shows \(F\) is \(C^1\) a.s., and includes the kimesurface metric |
Tomography (inverse) problem | Offers an analogy with quantum tomography, but identifiability and completeness results are over-simplified | Formulates and proves an identifiability theorem (Kime Pauli theorem) and gives a constructive inversion algorithm with error bounds |
Statistical consistency | Theorem 1 on kernel regression cites classical results without explicit conditions for circular covariate | Extends to mixed radial/circular covariate; specifies bandwidth sequence \((h_n)\) and moment assumptions |
Let’s extend the KPT overview above to include all the necessary technical details. First, we will limit the kime-phase distribution to the Laplace or von Mises families, and later we will generalize to any symmetric (periodic) distribution supported on \(S^1\).
Let \((\Omega,\mathcal F,\mathbb P)\) be a complete probability space and define the radial time as \[t:\Omega\to\mathbb R_{\ge 0},\qquad t\sim\mathcal U_T:=\mathsf{Uniform}(0,T)\;(T<\infty),\]
with the improper‑uniform limit recovered by letting \(T\to\infty\) in statements that remain finite. Conditional on \(t\), let the phase be \[\theta:\Omega\to(-\pi,\pi],\qquad \theta\mid t\sim\Phi_{(\mu(t),\beta(t))},\] where \(\Phi\) is the wrapped Laplace (or von Mises) family with density
\[f_{\Phi}(\theta\mid t)= \frac{\beta(t)}{2\sinh\!\bigl(\beta(t)\bigr)} \exp\!\bigl\{\beta(t)\cos\bigl(\theta-\mu(t)\bigr)\bigr\}. \tag{1}\]
The complex‑time (kime) coordinate is the measurable map
\[\kappa:\Omega\to\mathbb C,\qquad \kappa(\omega)=t(\omega)\,e^{i\theta(\omega)}.\]
When repeatedly observing a longitudinal process, let \(\bigl\{s_j(t):t\ge 0,\;j=1,\dots,N\bigr\}\) be i.i.d. real‑valued stochastic processes on \((\Omega,\mathcal F,\mathbb P)\) with continuous sample paths and finite second moments. Observations are made at design points
\[\{t_{j,k}\}_{k=1}^{M_j}\subset(0,T] \quad\Longrightarrow\quad \mathcal D=\bigl\{(j,k):s_{j,k}=s_j(t_{j,k})\bigr\}.\]
Of course, additive noise models, e.g., \(s\) plus measurement error, can also be incorporated, but are omitted here for clarity of presentation.
Let’s define the embedding map
\[F:\;(0,T]\times(-\pi,\pi] \longrightarrow \mathbb R^3,\qquad F(t,\theta)= \bigl(t\cos\theta,\;t\sin\theta,\;Z(t,\theta)\bigr), \tag{2}\]
where \(Z(t,\theta)=\mathbb E\bigl[s_1(t)\bigr]\) is the mean intensity. Assuming \(Z\in C^1\bigl((0,T]\times(-\pi,\pi]\bigr)\), \(F\) is a \(C^1\) immersion and its image \(\mathcal M:=F\!\bigl((0,T]\times(-\pi,\pi]\bigr)\subset\mathbb R^3\) is a 2‑dimensional manifold with boundary, e.g., a punctured cone when \(Z\) is radial.
In the kime coordinates \((t,\theta)\), the induced metric \(g\) is
\[g=\begin{pmatrix} 1+Z_t^2 & Z_t Z_\theta\\[4pt] Z_t Z_\theta & t^2+Z_\theta^2 \end{pmatrix}, \qquad Z_t=\partial_t Z,\;Z_\theta=\partial_\theta Z.\]
The curvatures follow from standard metric curvature formulas; see Appendix A.
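The induced metric can be verified symbolically from the embedding (2) as the pullback \(g = J^{\mathsf T} J\) of the Euclidean metric along the Jacobian \(J\) of \(F\); a sympy sketch with a generic height function \(Z\):

```python
import sympy as sp

t, th = sp.symbols('t theta', positive=True)
Z = sp.Function('Z')(t, th)

# embedding F(t, theta) = (t cos(theta), t sin(theta), Z(t, theta)), eq. (2)
F = sp.Matrix([t * sp.cos(th), t * sp.sin(th), Z])
J = F.jacobian([t, th])              # 3x2 Jacobian of the embedding
g = sp.simplify(J.T * J)             # induced (first fundamental form) metric

Zt, Zth = sp.diff(Z, t), sp.diff(Z, th)
g_expected = sp.Matrix([[1 + Zt**2, Zt * Zth],
                        [Zt * Zth, t**2 + Zth**2]])
```

For a radial surface (\(Z_\theta = 0\)) the metric reduces to \(\mathrm{diag}(1+Z_t^2,\; t^2)\), the metric of a surface of revolution.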
Consider the forward measurement model. For each datum \((j,k)\) draw an independent phase sample \(\vartheta_{j,k}\mid t_{j,k}\;\sim\;\Phi_{(\mu(t_{j,k}),\beta(t_{j,k}))},\) and form the Euclidean coordinate \[\mathbf p_{j,k}=F\!\bigl(t_{j,k},\vartheta_{j,k}\bigr) =\bigl(x_{j,k},y_{j,k},s_{j,k}\bigr). \tag{3}\]
Equation (3) realizes a random point cloud sampling of the kime‑surface. When \(M:=\sum_jM_j\to\infty\), the cloud becomes dense under mild conditions (law of large numbers on manifolds).
The corresponding inverse problem (Kime‑phase tomography) aims to recover the unknown conditional phase law \(\Phi_{(\mu(t),\beta(t))}\) from \(\mathcal D\) and hence reconstruct the embedding, eq. (2).
The harmonic expansion assumption requires that for each fixed \(t\) the process \(s_j(t)\) admits a circular harmonic expansion
\[\mathbb E\bigl[s_j(t)\,|\,\theta\bigr] =\sum_{n\in\mathbb Z}f_n(t)\,e^{in\theta}, \qquad f_{-n}=\overline{f_n}. \tag{4}\]
To state the “Kime–Pauli” identifiability theorem, let’s define the empirical moments \[\widehat m_n(t_k)=\frac1N\sum_{j=1}^{N}s_{j,k}\,e^{-in\vartheta_{j,k}}. \tag{5}\]
Using eq. (4), \(\mathbb E[s_j(t)e^{-in\theta}]=f_n(t)\,\varphi_n(t)\), where \(\varphi_n(t)\) is the \(n\)-th circular Fourier coefficient of \(f_\Phi(\theta\mid t)\). We’ll prove the following Information Completeness Theorem using the fact that, when \(f_n(t)\neq 0\), the moment sequence uniquely identifies \(\{\varphi_n(t)\}\) and Fourier inversion recovers \(f_\Phi\).
Theorem 1: (Information Completeness): Let’s fix \(t\in(0,T]\) and let \(m_{n}(t):=\mathbb E\!\bigl[s_1(t)\,e^{-in\theta}\bigr], \qquad n\in\mathbb Z,\) where \(s_1(t)\) admits the harmonic expansion \(\mathbb E\!\bigl[s_1(t)\mid \theta\bigr]=\sum_{n\in\mathbb Z}f_n(t)\,e^{in\theta}, \quad f_{-n}=\overline{f_n}.\) Assume that there exists at least one \(n_\star\neq0\) with \(f_{n_\star}(t)\neq0\) and the conditional phase density \(f_\Phi(\,\cdot\mid t)\in L^{2}(S^{1})\). Then the map \(f_\Phi(\,\cdot\mid t)\;\longmapsto\; \bigl\{m_{n}(t)\bigr\}_{n\in\mathbb Z}\) is injective; i.e. the entire moment sequence determines \(f_\Phi\) uniquely.
Proof: Let’s consider the moment factorization using the law of total expectation and eq. (4) \[m_{n}(t)=\mathbb E_\theta\!\bigl[\,\mathbb E[s_1(t)\mid\theta]\,e^{-in\theta}\bigr] \;=\;\sum_{k\in\mathbb Z} f_k(t) \underbrace{\mathbb E_\theta[e^{i(k-n)\theta}]}_{=\varphi_{n-k}(t)},\] where \(\varphi_{m}(t)=\int_{-\pi}^{\pi}e^{im\theta}\,f_\Phi(\theta\mid t)\,d\theta\) is the \(m\)-th Fourier coefficient of \(f_\Phi\).
Setting \(n=n_\star\) and using \(f_{n_\star}(t)\neq0\) gives \[\varphi_{0}(t)=\frac{m_{n_\star}(t)}{f_{n_\star}(t)}. \tag{1.1}\]
To recover all \(\varphi_{m}(t)\), fix any \(m\neq0\) and choose \(n=n_\star+m\). Then \[m_{n_\star+m}(t) =f_{n_\star}(t)\varphi_{m}(t)+\sum_{k\neq n_\star}f_k(t)\varphi_{n_\star+m-k}(t).\]
The term \(\varphi_{0}(t)\) is shown in eq. (1.1) and we can proceed inductively on \(|m|\) starting with \(|m|=1\). The linear system is upper‑triangular in \(\varphi_{m}\) and hence solvable because \(f_{n_\star}(t)\neq0\). Thus, every Fourier coefficient \(\{\varphi_m(t)\}_{m\in\mathbb Z}\) is uniquely obtained from the moments \(\{m_n(t)\}\).
The uniqueness of the density follows since \(f_\Phi(\,\cdot\mid t)\in L^{2}(S^{1})\) and its Fourier series converges to the function in \(L^{2}\)‑norm. Two \(L^{2}\) functions with identical Fourier coefficients coincide almost everywhere. Therefore, no two different circular densities can produce the same moment sequence, establishing injectivity. \(\square\)
The following theorem guarantees the consistency of the estimator. It relies on a plug‑in inversion \(\widehat \varphi_n(t_k)=\widehat m_n(t_k)/f_n(t_k)\), followed by truncated Fourier synthesis to establish the consistency of the estimator \(\widehat \Phi(t_k)\), and a kernel smoothing in \(t\) to recover \(\widehat\Phi(t)\) on \((0,T]\).
Theorem 2 (LLN‑based consistency). For a fixed radial location \(t_k\) and integer \(n\), define \[\widehat m_{n}(t_k)=\frac{1}{N}\sum_{j=1}^{N} X_{j}^{(n,k)}, \qquad X_{j}^{(n,k)} := s_{j,k}\,e^{-in\vartheta_{j,k}},\] where \(\vartheta_{j,k}\stackrel{\text{i.i.d.}}{\sim}\Phi(\,\cdot\mid t_k)\) and \(\{s_{j,k}\}_{j=1}^{N}\) are independent copies of \(s_1(t_k)\) satisfying \(\sup_{j,k}\mathbb E|s_{j,k}|^{2}<\infty\). Then, \[\widehat m_{n}(t_k)\xrightarrow[N\to\infty]{\text{a.s.}} m_{n}(t_k)=\mathbb E[X_{1}^{(n,k)}].\]
Proof: For fixed \(t_k\) the pairs \(\{(s_{j,k},\vartheta_{j,k})\}_{j=1}^{N}\) are independent and identically distributed by hypothesis. Hence, so are the random variables \(X_{j}^{(n,k)}\).
The finite variance follows by the Cauchy–Schwarz inequality, \[\mathbb E\left |X_{j}^{(n,k)}\right |^{2}=\mathbb E|s_{j,k}|^{2}\le \sup_{j,k}\mathbb E|s_{j,k}|^{2}<\infty.\]
The Kolmogorov Strong Law of Large Numbers (sLLN) indicates that if \(X_1,X_2,\dots\) are i.i.d. with \(\mathbb E|X_1|<\infty\), then \(\frac1N\sum_{j=1}^{N}X_j\to\mathbb E[X_1]\) almost surely. In our case, \(\mathbb E|X_{1}^{(n,k)}| \le (\mathbb E|s_{1,k}|^{2})^{1/2}<\infty\), so the sLLN applies and yields \(\widehat m_{n}(t_k)\;\xrightarrow{\text{a.s.}}\;\mathbb E[X_{1}^{(n,k)}]=m_{n}(t_k).\)
Because the argument is carried out separately for each fixed \(t_k\), we do not require joint convergence along a growing grid of \(k\). If a finite set of \(\{t_k\}\) is considered, take the intersection of the corresponding probability‑one events. When \(k\) also grows, we can apply a uniform sLLN (Kolmogorov–Chentsov or Vapnik–Červonenkis) once measurability and envelope conditions for \(\{X_{j}^{(n,k)}\}\) are verified. Hence, the estimator \(\widehat m_{n}(t_k)\) is strongly consistent. \(\square\)
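To illustrate Theorem 2 numerically, the following sketch is a hedged Monte Carlo check: a hypothetical von Mises phase law stands in for \(\Phi(\,\cdot\mid t_k)\), and the amplitudes are taken independent of the phases. The empirical moment \(\widehat m_n(t_k)\) is compared to its population value computed by quadrature.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, kappa, n = 0.7, 2.0, 2        # hypothetical von Mises phase-law parameters
N = 200_000                       # number of repeated measurements

# i.i.d. pairs (s_j, theta_j); s_j plays the role of s_j(t_k), independent of theta
theta = rng.vonmises(mu, kappa, size=N)
s = 1.0 + 0.2 * rng.standard_normal(N)           # E[s] = 1, finite second moment

m_hat = np.mean(s * np.exp(-1j * n * theta))     # empirical moment  \hat m_n(t_k)

# Population moment m_n = E[s] E[e^{-in theta}] via rectangle-rule quadrature
grid = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
w = np.exp(kappa * np.cos(grid - mu))
w /= w.sum()                                     # discrete von Mises probabilities
m_true = (np.exp(-1j * n * grid) * w).sum()
```

The strong law guarantees the gap shrinks to zero almost surely as \(N\to\infty\); at this sample size, the deviation is already at the \(N^{-1/2}\) Monte Carlo scale.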
Because \(\widehat f_{n}\) is obtained from a consistent regression or known a priori and each empirical moment is strongly consistent, the plug‑in estimator \[\widehat\varphi_{n}(t_k)=\widehat m_{n}(t_k)\bigl/\widehat f_{n}(t_k),\] inherits strong consistency whenever \(\widehat f_{n}(t_k)\) converges in probability to a non‑zero limit. Therefore, a truncated Fourier inversion of the consistent sequence \(\{\widehat\varphi_n(t_k)\}_{|n|\le n_{\max}}\) yields a consistent reconstruction \(\widehat\Phi(t_k)\) of the circular kime-phase density.
Step | Action
---|---
1. Phase sampling | For each \((j,k)\) simulate \(\vartheta_{j,k}\) by inverse‑CDF or accept–reject sampling from eq. (1)
2. Moment computation | Compute \(\widehat m_n(t_k)\) for \(|n| \le n_{\max}\) using eq. (5)
3. Coefficient estimation | If \(f_n(t)\) is unknown, fit \(f_n(\cdot)\) by non‑parametric regression on \((t_k,\widehat m_n(t_k))\)
4. Inversion | Obtain \(\widehat\Phi(t_k)\) by Fourier inversion of \(\{\widehat\varphi_n(t_k)\}\)
5. Surface rendering | Build \(\widehat Z(t,\theta)=\sum_n \widehat f_n(t)e^{in\theta}\) and embed via eq. (2) to visualize \(\widehat{\mathcal M}\)
In practice, the bandwidths and truncation orders \((h,n_{\max})\) can be selected by leave‑one‑trajectory‑out cross‑validation.
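The steps above can be sketched end-to-end. The toy example below is a hedged illustration, not the TCIU implementation: it assumes a hypothetical von Mises phase law at a single fixed \(t_k\), takes \(f_n(t_k)\) as known (so step 3 is skipped), and reconstructs the phase density by truncated Fourier synthesis.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, kappa = 0.0, 3.0              # hypothetical phase law at a fixed t_k
n_max, N = 6, 100_000

# Step 1: phase sampling (von Mises stands in for the circular law of eq. (1))
theta = rng.vonmises(mu, kappa, size=N)
s = 2.0 + 0.1 * rng.standard_normal(N)    # amplitudes, independent of theta
f_n = 2.0                                 # assumed known: f_n(t_k) = E[s] here

# Step 2: empirical moments for |n| <= n_max
ns = np.arange(-n_max, n_max + 1)
m_hat = np.array([np.mean(s * np.exp(-1j * n * theta)) for n in ns])

# Step 4: plug-in coefficients and truncated Fourier inversion
phi_hat = m_hat / f_n                     # \hat phi_n = \hat m_n / f_n
grid = np.linspace(-np.pi, np.pi, 512, endpoint=False)
dens_hat = np.real(phi_hat @ np.exp(1j * np.outer(ns, grid))) / (2 * np.pi)

# Reference: the true von Mises density (np.i0 is the Bessel I_0 normalizer)
dens_true = np.exp(kappa * np.cos(grid - mu)) / (2 * np.pi * np.i0(kappa))
```

The reconstruction error combines truncation bias (geometric decay of the von Mises coefficients beyond \(n_{\max}\)) and the \(N^{-1/2}\) moment noise; both are small here. Step 5 would feed \(\widehat\Phi\) and \(\widehat f_n\) into the surface synthesis, with \((h,n_{\max})\) chosen by the cross‑validation remark above.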
Obtaining higher‑order manifolds requires extending the time domain to \(t\in\mathbb R^d\) with \(d>1\), yielding a \((d+1)\)‑dimensional kime‑surface in \(\mathbb R^{d+2}\). Additive observation noise \(s_{j,k}=S_{j,k}+\varepsilon_{j,k}\) may inflate the moment variance, but it does not affect identifiability under the zero‑mean assumption. In eq. (1), we can also use alternative phase laws, such as the wrapped Cauchy or non‑parametric circular densities; Theorems 1 and 2 hold for any \(f_\Phi\in L^2(S^1)\). Finally, geometric inference based on the curvature of \(\mathcal M\) encodes dynamical features of the underlying process, and bootstrapping \(\widehat g\) provides uncertainty quantification.
Next we will expand the details of the previous section, starting with a full explication of the probability space, \((\Omega,\mathcal F,\mathbb P)\), that underlies kime‑phase tomography (KPT). Then we will generalize the phase law beyond the wrapped‑Laplace and von Mises distributions.
The probability space \((\Omega,\mathcal F,\mathbb P)\) involves the following.
Component | Symbol | Measurable structure | Description
---|---|---|---
Radial time | \(\Omega_t=(0,\infty)\) | Borel \(\sigma\)‑algebra \(\mathcal B((0,\infty))\) | One outcome \(t\) is drawn for each realization
Phase | \(\Omega_\theta=S^{1}=(-\pi,\pi]\) | Borel \(\sigma\)‑algebra \(\mathcal B(S^{1})\) | Conditional on \(t\), an angular outcome \(\theta\) is drawn from some circular density
Longitudinal paths | \(\Omega_s=C([0,T];\mathbb R)^{N}\) | Cylindrical \(\sigma\)‑algebra generated by the point evaluations \(\pi_{j,\tau}(s)=s_j(\tau)\) | A vector of \(N\) continuous‑time sample paths \(s=(s_1,\dots,s_N)\)
Indexing noise (optional) | \(\Omega_\varepsilon=\mathbb R^{\infty}\) | Product Borel \(\sigma\)‑algebra | Measurement errors \(\{\varepsilon_{j,k}\}\), if needed
The master sample space is the Cartesian product
\[\boxed{\;\Omega=\Omega_t\times\Omega_\theta\times\Omega_s\times\Omega_\varepsilon\;}\]
with \(\Omega_\varepsilon\) omitted if no measurement noise is modeled.
The \(\sigma\)‑algebra is
\[\boxed{\; \mathcal F=\mathcal B((0,\infty))\otimes \mathcal B(S^{1})\otimes \sigma\!\bigl\{\pi_{j,\tau}\bigr\}_{1\le j\le N,\;0\le\tau\le T} \otimes \mathcal B(\mathbb R^{\infty}) \;}\]
and the corresponding probability measure is
\[ \boxed{\; \mathbb P=\mathbb P_t\otimes\mathbb P_{\theta\mid t}\otimes \bigl(\mathbb P_{s}\bigr)^{\otimes N}\otimes\mathbb P_\varepsilon \;} \] where \(\mathbb P_t\) is a (truncated) uniform, or any proper, law on \((0,\infty)\); \(\mathbb P_{\theta\mid t}\) is a family of circular distributions, \(\{\Phi_t\}_{t>0}\); \(\mathbb P_s\) is the law of one real‑valued, second‑moment continuous process \(s(t)\), with i.i.d. copies indexed by \(j\); and \(\mathbb P_\varepsilon\) is a product measure for i.i.d. zero‑mean noise \(\varepsilon_{j,k}\), which is set to a point mass at \(0\) if the noise is ignored.
Each coordinate map, e.g., \(t(\omega)\), \(\theta(\omega)\), \(s_j(\cdot)(\omega)\), is measurable by construction, so all derived maps, such as the complex‑time coordinate \(\kappa(\omega)=t(\omega)\,e^{i\theta(\omega)}\), are measurable as well.
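As a concrete instantiation of one draw \(\omega\in\Omega\), the sketch below samples each coordinate from an assumed product law: a truncated uniform \(\mathbb P_t\), a wrapped-Laplace \(\mathbb P_{\theta\mid t}\) with a purely illustrative scale curve \(\beta(t)\), \(N\) i.i.d. discretized paths, and zero-mean noise; it then evaluates the derived measurable map \(\kappa(\omega)\). All numeric choices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, K = 10.0, 5, 50            # horizon, repeats, grid size (all assumptions)

# One realization omega = (t, theta, s_1..s_N, eps) from the product space
t = rng.uniform(0.0, T)                          # P_t: truncated uniform on (0, T]
beta = 0.5 + 0.05 * t                            # hypothetical scale curve beta(t)
theta = np.angle(np.exp(1j * rng.laplace(0.0, beta)))  # Laplace draw wrapped onto S^1
tgrid = np.linspace(T / K, T, K)
s = np.sin(tgrid)[None, :] + 0.3 * rng.standard_normal((N, K))  # N i.i.d. path copies
eps = 0.05 * rng.standard_normal((N, K))         # P_eps: zero-mean measurement noise

kappa_coord = t * np.exp(1j * theta)             # derived map kappa(omega) = t e^{i theta}
```

Because each coordinate is a measurable function of the generator's output, composites such as `kappa_coord` inherit measurability, matching the remark above.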
We will now explore the minimal assumptions that ensure the kime-phase law is fully general. For every \(t>0\), let \(\Phi_t\) be a probability measure on \(S^{1}\), subject to the following three assumptions:

- (A1) \(\Phi_t\) is absolutely continuous, with density \(\varphi_t\) relative to the normalized measure \(\tfrac{d\theta}{2\pi}\) on \(S^1\);
- (A2) the map \(t\mapsto\Phi_t\) is measurable, so that conditioning on \(t\) is well defined;
- (A3) \(\varphi_t\in L^{2}(S^{1})\) for every \(t>0\).
The wrapped Laplace, von Mises, wrapped Cauchy, and wrapped Gaussian distributions, as well as their mixtures and non‑parametric kernel densities, all satisfy \((A1)\)–\((A3)\).
In the general case, a more unified notation can be used \[\theta\mid t\;\sim\;\Phi_t,\qquad d\Phi_t(\theta)=\varphi_t(\theta)\,\frac{d\theta}{2\pi},\qquad \varphi_t\in L^{2}(S^{1})\; \forall t.\]
In practice, a parametric family may reduce complexity or dimensionality, in which case the parametric distribution may explicate its parameters, \(\Phi_{\eta(t)}\), with a parameter curve \(\eta(t)\) taking values in some finite‑dimensional manifold. For instance, the wrapped‑Laplace and von Mises cases above correspond to the special choice \(\eta(t)=(\mu(t),\beta(t))\) and the density shown in eq. (1).
Because \(\varphi_t\in L^{2}(S^{1})\), it has Fourier coefficients \[\varphi_{t,n}:=\int_{-\pi}^{\pi}e^{-in\theta}\,\varphi_t(\theta)\,\frac{d\theta}{2\pi}, \qquad n\in\mathbb Z,\] and the series \(\theta\mapsto\sum_{n}\varphi_{t,n}e^{in\theta}\) converges to \(\varphi_t\) in \(L^{2}\). This is exactly the requirement in Theorem 1. Hence, the identifiability proof above is unchanged.
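For a concrete check of this expansion, the wrapped Cauchy density (used here only as an illustrative choice of \(\varphi_t\)) has the known closed-form coefficients \(\varphi_{t,n}=\rho^{|n|}e^{-in\mu}\) under this \(e^{-in\theta}\) convention. The sketch below verifies them by rectangle-rule quadrature on a uniform grid, which is spectrally accurate for smooth periodic integrands.

```python
import numpy as np

rho, mu = 0.6, 0.4               # hypothetical wrapped-Cauchy parameters
grid = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

# Density of theta w.r.t. the normalized measure d(theta)/(2 pi) on S^1
phi_t = (1 - rho**2) / (1 + rho**2 - 2 * rho * np.cos(grid - mu))

ns = np.arange(-4, 5)
# phi_{t,n} = int e^{-in theta} phi_t(theta) d(theta)/(2 pi), rectangle rule
coefs = np.array([np.mean(np.exp(-1j * n * grid) * phi_t) for n in ns])
closed_form = rho ** np.abs(ns) * np.exp(-1j * ns * mu)
```

The \(n=0\) coefficient equals 1, confirming normalization, and the geometric decay \(\rho^{|n|}\) illustrates why truncated Fourier synthesis converges quickly for such densities.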
The table below illustrates that the earlier proofs remain valid, as the corresponding arguments generalize directly.
Section | Dependence on a specific \(\Phi\) | Need for revision | How to phrase generally
---|---|---|---
Embedding \(F(t,\theta)\) | None (only needs angles) | No | Keep \(F(t,\theta)=(t\cos\theta,\,t\sin\theta,\,Z(t,\theta))\)
Curvatures (see Appendix A) | Only uses partials of \(Z\) | No | Unchanged
Theorem 1 (information completeness) | Uses the Fourier expansion of the phase density; needs (A3) | No | Replace "wrapped Laplace density" by "any \(L^{2}\) circular density \(\varphi_t\)"
Theorem 2 (LLN consistency) | No assumption on \(\Phi_t\) beyond finite variance of \(e^{-in\theta}\) (automatic) | No | Proof is unchanged
Algorithm step 1 (moment calculation) | Works for any phase law because only \(\theta\) appears inside \(e^{-in\theta}\) | No | Unchanged
Algorithm step 4 (parametric inversion) | A model (e.g., von Mises) is optional | Optional | Offer both parametric (fewer moments) and non‑parametric (Fourier series) options
Hence, only the model statement in the set‑up section may need revision.
Phase law (general form): Conditional on \(t>0\), draw a phase \[ \boxed{\; \theta\mid t\;\sim\;\Phi_t,\qquad \Phi_t\hbox{ supported on }S^{1},\; d\Phi_t(\theta)=\varphi_t(\theta)\,\frac{d\theta}{2\pi},\; \varphi_t\in L^{2}(S^{1}), \;}\] where \(\varphi_t\) is any symmetric (or asymmetric, if desired) circular density satisfying \(\int_{-\pi}^{\pi}\varphi_t(\theta)\tfrac{d\theta}{2\pi}=1\). The wrapped‑Laplace and von Mises distributions are representative examples, but the KPT theory is not limited to these cases.
When implementing the most general KPT algorithm, we only need to supply an appropriate phase sampler (e.g., `rwrappedcauchy`, `rvonmises`, or a kernel density sampler) for \(\theta_{j,k}\). This suggests that KPT is agnostic to the specific circular distribution chosen for the phase. All theoretical guarantees require only that \(\varphi_t\in L^{2}(S^{1})\), so that its Fourier series exists.
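Since the named R samplers are interchangeable here, a generic sampler suffices. Below is a minimal Python sketch (an illustrative stand-in, not the TCIU code) of an inverse-CDF sampler that works for any circular density on \(S^1\), checked against quadrature on a hypothetical two-component mixture.

```python
import numpy as np

def sample_circular(density, size, rng, M=4096):
    """Inverse-CDF sampler for an arbitrary circular density on (-pi, pi].

    `density` may be un-normalized; it is renormalized internally, so any
    nonnegative L^2 function on S^1 is acceptable."""
    grid = np.linspace(-np.pi, np.pi, M, endpoint=False)
    p = density(grid)
    cdf = np.cumsum(p) / p.sum()
    return np.interp(rng.uniform(size=size), cdf, grid)

rng = np.random.default_rng(4)
# Hypothetical mixture density; KPT only requires phi_t in L^2(S^1)
dens = lambda th: 0.6 * np.exp(2.0 * np.cos(th)) + 0.4 * np.exp(3.0 * np.cos(th - 2.0))
theta = sample_circular(dens, 200_000, rng)

m1 = np.mean(np.exp(-1j * theta))            # first circular moment from samples
grid = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
w = dens(grid)
w = w / w.sum()
m1_true = (np.exp(-1j * grid) * w).sum()     # quadrature reference value
```

Swapping `dens` for a wrapped Cauchy, von Mises, or kernel density estimate changes nothing downstream, which is exactly the distribution-agnostic property claimed above.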
The TCIU Appendix includes an implementation of a simulated fMRI time-series and the corresponding Kime-Phase Tomography (KPT) fMRI Time-Series Simulation, showing the mapping of repeated fMRI time-series measurements to 2D kime-surfaces. This simulation specifically tests the KPT algorithm on synthetic fMRI data with ON (stimulus) and OFF (rest) conditions.
To study the geometry of the kime‑surface, let's start with the smooth kime-surface embedding \[F:(0,T]\times(-\pi,\pi]\;\longrightarrow\;\mathbb R^{3}, \qquad F(t,\theta)=\bigl(x(t,\theta),y(t,\theta),Z(t,\theta)\bigr) =(t\cos\theta,\;t\sin\theta,\;Z(t,\theta)),\] with \(Z\in C^{2}((0,T]\times(-\pi,\pi])\), and denote the partial derivatives by \(Z_t=\partial_tZ,\;Z_\theta=\partial_\theta Z,\) etc.
The first fundamental form, writing \(X_t=\partial_tF\) and \(X_\theta=\partial_\theta F\) for the tangent vectors, is
\[\begin{aligned} E&=\langle X_t,X_t\rangle =1+Z_t^{2},\\[2pt] F&=\langle X_t,X_\theta\rangle =Z_tZ_\theta,\\[2pt] G&=\langle X_\theta,X_\theta\rangle =t^{2}+Z_\theta^{2}. \end{aligned}\] and the normal vector is \[W=X_t\times X_\theta =(\sin\theta\,Z_\theta-t\cos\theta\,Z_t,\; -\cos\theta\,Z_\theta-t\sin\theta\,Z_t,\; t),\qquad \|W\|^{2}=t^{2}\bigl(1+Z_t^{2}\bigr)+Z_\theta^{2}.\]
Let’s define the squared normal magnitude,
\[\Delta :=\boxed{\;\|W\|^{2}=EG-F^{2} \;} \tag{A.1}\]
Then the second fundamental form is
\[\begin{aligned} e_{\text{num}} &= t\,Z_{tt},\\ f_{\text{num}} &= t\,Z_{t\theta}-Z_\theta,\\ g_{\text{num}} &= t\bigl(tZ_t+Z_{\theta\theta}\bigr). \end{aligned} \qquad e=\frac{e_{\text{num}}}{\|W\|},\; f=\frac{f_{\text{num}}}{\|W\|},\; g=\frac{g_{\text{num}}}{\|W\|}.\]
Using \(K=(eg-f^{2})/(EG-F^{2})\) and equation (A.1), the Gaussian curvature is
\[\boxed{\; \underbrace{K}_{Gaussian}=\frac{e_{\text{num}}g_{\text{num}}-f_{\text{num}}^{2}} {\Delta^{2}} \;=\; \frac{\displaystyle t^{2}Z_{tt}\bigl(tZ_t+Z_{\theta\theta}\bigr)\;-\; \bigl(tZ_{t\theta}-Z_\theta\bigr)^{2}} {\bigl[t^{2}\bigl(1+Z_t^{2}\bigr)+Z_\theta^{2}\bigr]^{2}} } \tag{A.2}\]
Note that the sign of \(K\) is determined entirely by the numerator \(e_{\text{num}}g_{\text{num}}-f_{\text{num}}^{2}\), since the denominator \(\Delta^{2}\) is strictly positive.
Also, the mean curvature
\[\boxed{\; \underbrace{H}_{mean}=\frac{1}{2}\frac{Eg+Ge-2Ff}{EG-F^{2}} =\frac{(1+Z_t^{2})\,g_{\text{num}}+(t^{2}+Z_\theta^{2})\,e_{\text{num}} -2Z_tZ_\theta\,f_{\text{num}}} {2\,\Delta^{3/2}} } \tag{A.3}\]
may be expanded as
\[H=\frac{(1+Z_t^{2})\,t\bigl(tZ_t+Z_{\theta\theta}\bigr) +(t^{2}+Z_\theta^{2})\,tZ_{tt} -2Z_tZ_\theta\bigl(tZ_{t\theta}-Z_\theta\bigr)} {2\,\bigl[t^{2}(1+Z_t^{2})+Z_\theta^{2}\bigr]^{3/2}}.\]
Finally, the principal curvatures are
\[\underbrace{k_{1,2}}_{principal}=H\;\pm\;\sqrt{H^{2}-K}.\]
With respect to the kime coordinate basis \((X_t,X_\theta)\), the shape operator is
\[\boxed{\; S=I^{-1}II =\frac{1}{\Delta^{3/2}} \begin{pmatrix} G & -F\\ -F & E \end{pmatrix} \begin{pmatrix} e_{\text{num}} & f_{\text{num}}\\ f_{\text{num}} & g_{\text{num}} \end{pmatrix} }\tag{A.4} \]
Jointly, equations (A.2)–(A.4) give a complete account of the intrinsic and extrinsic geometry of kime‑surfaces. All classical surface invariants (Christoffel symbols, geodesics, the Laplace–Beltrami operator, etc.) can now be written explicitly from the scalar coefficients \(E,F,G\) and their \(t,\theta\) derivatives.
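As a quick consistency check of eqs. (A.2)–(A.3), the sketch below evaluates \(K\) and \(H\) for the hemisphere graph \(Z(t,\theta)=\sqrt{R^2-t^2}\) (an illustrative choice, not from the text), for which a sphere of radius \(R\) must yield \(K=1/R^{2}\) and \(|H|=1/R\).

```python
import numpy as np

def curvatures(t, Zt, Zth, Ztt, Ztth, Zthth):
    """Gaussian and mean curvature of (t cos th, t sin th, Z) via eqs. (A.2)-(A.3)."""
    e_num = t * Ztt
    f_num = t * Ztth - Zth
    g_num = t * (t * Zt + Zthth)
    Delta = t**2 * (1 + Zt**2) + Zth**2           # = EG - F^2 = ||W||^2
    K = (e_num * g_num - f_num**2) / Delta**2
    H = ((1 + Zt**2) * g_num + (t**2 + Zth**2) * e_num
         - 2 * Zt * Zth * f_num) / (2 * Delta**1.5)
    return K, H

# Hemisphere Z = sqrt(R^2 - t^2): rotationally symmetric, so theta-partials vanish
R, t = 2.0, 1.2
Z = np.sqrt(R**2 - t**2)
Zt, Ztt = -t / Z, -R**2 / Z**3                    # analytic partial derivatives
K, H = curvatures(t, Zt, 0.0, Ztt, 0.0, 0.0)
```

The sign of \(H\) depends on the orientation of the normal \(W\), while \(K\) is orientation-independent; both match the sphere's values for any admissible \(0<t<R\).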