# 1 The effects of Kime-Magnitudes and Kime-Phases

Jointly, the amplitude spectrum (magnitudes) and the phase spectrum (phases) uniquely determine the spacetime representation of a signal. However, the two spectra are not equally important. In general, the phase spectrum has a stronger effect on the signal than the amplitude spectrum. In other words, reconstructions are less sensitive to noise or estimation errors in the magnitudes; perturbations of the magnitudes are less critical than proportional changes in the phase spectrum. For instance, particularly in terms of the spacetime locations where the signal is zero, the signal can be reconstructed (by the IFT) relatively accurately from incorrect magnitudes, as long as the correct phases are used REF. For a real-valued signal $$f$$, suppose the amplitude of its Fourier transform, $$FT(f)=\hat{f}$$, is $$A(\omega) > 0, \forall \omega$$. Then: $f(x)=IFT(\hat{f})=Re\left (\frac{1}{2\pi}\int_{R} \underbrace{A(\omega)e^{i\phi(\omega)}}_{\hat{f}(\omega)}\ e^{i\omega x}d\omega \right)= Re\left (\frac{1}{2\pi}\int_{R}A(\omega)e^{i(\phi(\omega)+\omega x)}d\omega\right) = \frac{1}{2\pi}\int_{R} {A(\omega) \cos(\phi(\omega)+\omega x)}d\omega.$

Thus, the zeros of the integrand in $$f(x)$$ occur for $$\omega x+ \phi(\omega)=\pm(2k+1)\frac{\pi}{2}$$, $$k= 0,1,2,\dots$$, i.e., at the odd multiples of $$\frac{\pi}{2}$$, whose locations are controlled by the phases $$\phi(\omega)$$.

A solely amplitude-driven reconstruction $$\left ( f_A(x)=\frac{1}{2\pi}\int_{R}\underbrace{A(\omega)}_{no\ phase}\ e^{i\omega x}d\omega \right)$$ generally yields worse results than a solely phase-based reconstruction $$\left ( f_{\phi}(x)=\frac{1}{2\pi} \int_{R}\underbrace{e^{i\phi(\omega)}}_{no\ amplitude}\ e^{i\omega x}d\omega\right )$$. The latter has a different total energy from the original signal; however, it retains recognizable signal features, as the zero-level curves of the original $$f$$ and of the phase-only reconstruction $$f_{\phi}$$ are preserved. This suggests that the Fourier phase of a signal is more informative than the Fourier amplitude, i.e., signal reconstruction is relatively robust to errors or perturbations of the magnitudes.
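The contrast between amplitude-only and phase-only reconstruction can be demonstrated numerically. The following sketch (in Python for illustration, although the chapter's own code is in R) compares the two partial reconstructions of a rectangular pulse using a plain DFT in place of the continuous IFT above; the pulse shape, signal length, and correlation score are arbitrary illustrative choices.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

# Real test signal: a rectangular pulse supported on n = 10..15
N = 32
f = [1.0 if 10 <= n <= 15 else 0.0 for n in range(N)]

F = dft(f)
mag = [abs(Fk) for Fk in F]           # amplitude spectrum A(omega)
phs = [cmath.phase(Fk) for Fk in F]   # phase spectrum phi(omega)

f_full = idft([m * cmath.exp(1j * p) for m, p in zip(mag, phs)])  # both spectra
f_mag = idft(mag)                                # magnitude only (phases set to 0)
f_phs = idft([cmath.exp(1j * p) for p in phs])   # phase only (magnitudes set to 1)

def corr(a, b):
    """Pearson correlation between two equal-length real sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

print(round(corr(f, f_phs), 3), round(corr(f, f_mag), 3))
```

In this toy example the phase-only signal keeps the pulse located in the right place (its zero-level structure), while the magnitude-only signal concentrates its energy near the origin, so the former correlates much better with the original.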

In X-ray crystallography, crystal structures are bombarded by particles/waves, which are diffracted by the crystal to yield the observed diffraction spots or patterns. Each diffraction spot corresponds to a point in the reciprocal lattice and represents a wave with a specific amplitude and a relative phase. Probabilistically, as the particles (e.g., X-ray photons) scatter off the crystal, the intensity observed in each direction is proportional to the square of the wave amplitude, i.e., the square of the wave's Fourier magnitude. The detectors capture these amplitudes as counts of particle directions, but miss all information about the relative phases of the different diffraction patterns.

Spacekime analytics is analogous to X-ray crystallography, DNA helix modeling, and other applications where only the Fourier magnitudes (time), i.e., the power spectrum, are observed, but not the phases (kime-directions), which must be estimated to correctly reconstruct the intrinsic 3D object structure REF, in our case, the correct spacekime analytical inference. Clearly, signal reconstruction based solely on either the amplitudes or the phases is an ill-posed problem, i.e., there are many alternative solutions. In practice, such signal or inference reconstructions are always application-specific, rely on some a priori knowledge of the process (or objective function), or depend on information-theoretic criteria to derive conditional solutions. Frequently, such solutions are obtained via least squares, maximum entropy criteria, maximum a posteriori distributions, Bayesian estimation, or simply by approximating the unknown amplitudes or phases using prior observations, similar processes, or theoretical models.

# 2 Solving the Missing Kime-Phase Problem

There are many alternative solutions to the problem of estimating the unobserved kime-phases. All solutions depend on the quality of the data (e.g., noise), the signal energy (e.g., the strength of association between covariates and outcomes), and the general experimental design. There can be rather large errors in the phase reconstructions, which will in turn affect the final spacekime analytic results. Most phase-problem solutions are based on exploiting some prior knowledge about the characteristics of the experimental design (case-study phenomenon) and the desired inference (spacekime analytics). For instance, if we artificially boost the energy of the case-study (e.g., by lowering the noise, increasing the SNR, or strengthening the relation between explanatory and outcome variables), the phases computed from this stronger-signal dataset will be more accurate than the original phase estimates. Examples of phase-problem solutions include energy modification and fitting-and-refinement methods.

## 2.1 Energy Modification Strategies

In general, energy modification techniques rely on prior knowledge, testable hypotheses, or intuition to modify the dataset by strengthening the expected relation we are trying to uncover using spacekime analytics.

### 2.1.1 Kime-phase noise distribution flattening

In many practical applications, part of the dataset (spanning both cases and features) contains valuable information, whereas the rest of the data may include irrelevant, noisy, or disruptive information.

Clearly, we can’t explicitly untangle these two components; however, we do expect that the irrelevant data portion would yield uninformative/unimportant kime-phases, which may be used to estimate the kime-phase noise level and noise distribution. Intuitively, if we modify the dataset to flatten the irrelevant kime-phases, the estimates of the corresponding true-signal kime-phases may be more accurate or more representative. We can think of this process as using kime-phase information from some known strong features to improve the kime-phase information of other particular features. Kime-phase noise distribution flattening requires that the kime-phase estimates be good enough to detect the boundaries between the strong features and the rest.
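As a minimal sketch of this idea (in Python; the magnitude threshold standing in for the detected strong/weak feature boundary is a hypothetical choice), one might keep the phase estimates of strong spectral components and flatten the phases of the weak, presumably noise-dominated ones:

```python
def flatten_weak_phases(spectrum, threshold):
    """Keep the phase estimates of strong components; flatten (zero out) the
    phases of weak, presumably noise-dominated components while preserving
    their energy. `threshold` is a hypothetical magnitude cutoff marking the
    boundary between strong features and the noisy remainder."""
    out = []
    for z in spectrum:
        if abs(z) >= threshold:
            out.append(z)                     # strong feature: trust its phase
        else:
            out.append(complex(abs(z), 0.0))  # weak feature: keep magnitude, flatten phase
    return out

# Toy spectrum: two strong components among weak, noisy ones
spec = [complex(5, 5), complex(0.1, -0.2), complex(-4, 3), complex(0.05, 0.1)]
flat = flatten_weak_phases(spec, threshold=1.0)
```

Only the two strong components retain their estimated phases; the energy of every component is unchanged.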

### 2.1.2 Multi-sample Kime-Phase Averaging

It’s natural to assume that multiple instances of the same process would yield similar analytics and inference results. For large datasets, we can use ensemble methods (e.g., SuperLearner and CBDA) to iteratively generate independent samples, which would be expected to lead to analogous kime-phase estimates and analytical results. Thus, we expect that when salient features are extracted by spacekime analytics from independent samples, their kime-phase estimates should be highly associated (e.g., correlated), albeit perhaps not identical. Weak features, however, would exhibit exactly the opposite effect: their kime-phases may be highly variable (noisy). By averaging the kime-phases, noisy areas in the dataset may cancel out, whereas patches of strong signal may preserve the kime-phase details, which would lead to increased kime forecasting accuracy and reproducibility of the kime analytics.
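This averaging effect can be sketched numerically (in Python; the sample sizes, noise levels, and coherence measure are illustrative assumptions). Phases are averaged on the unit circle, so consistent estimates reinforce one another while uniformly scattered ones cancel:

```python
import cmath
import random

def circular_mean(phases):
    """Mean direction of a set of angles: sum the unit vectors and take the
    angle of the sum (avoids wrap-around artifacts of a naive average)."""
    return cmath.phase(sum(cmath.exp(1j * p) for p in phases))

def coherence(phases):
    """Resultant length in [0, 1]: 1 = identical phases, ~0 = uniform scatter."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

random.seed(42)
true_phase = 1.0
# A salient feature: independent samples yield consistent phase estimates
salient = [true_phase + random.gauss(0, 0.2) for _ in range(200)]
# A weak feature: phase estimates scatter uniformly over the circle
noisy = [random.uniform(-cmath.pi, cmath.pi) for _ in range(200)]

print(round(circular_mean(salient), 2),
      round(coherence(salient), 2), round(coherence(noisy), 2))
```

The salient feature's averaged phase lands close to the true value with high coherence, while the weak feature's estimates largely cancel out.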

### 2.1.3 Histogram equalization

As common experimental designs and similar datasets exhibit analogous characteristics, the corresponding spacekime analytics are also expected to be synergistic. Spacekime inference that does not yield results in some controlled or expected range may be indicative of incorrect kime-phase estimation. We can use histogram equalization methods to improve the kime-phase estimates. This may be accomplished by altering the distribution of the kime-phases either to match the phase distribution of other similar experimental designs or to generate more expected spacekime analytical results.

### 2.1.4 Fitting and refinement

Related to energy modification strategies, the fitting-and-refinement technique capitalizes on the fact that strong-energy datasets tend to have a smaller set of salient features. So, if we construct case-studies with some strong features, the corresponding kime-phases will be more accurate, and the resulting inference/analytics will be more powerful and highly reproducible. Various classification, regression, supervised and unsupervised methods, and other model-based techniques allow us both to fit a model (estimate coefficients and structure) and to apply the model for outcome predictions and forecasting. Such models permit control over the characteristics of individual features and their multivariate inter-relations, which can be exploited to gather valuable kime-phase information. Starting with a reasonable guess (a kime-phase prior), the fitting-and-refinement technique can be applied iteratively to (1) reconstruct the data into spacetime using the kime-phase estimates, (2) fit or estimate the spacekime analytical model, (3) compare the analytical results and inference to expected outcomes, and (4) refine the kime-phase estimator aiming for better outcomes (#3). Indeed, other energy modification strategies (e.g., averaging or flattening) can be applied before a new iteration to build a new model is initiated (#1 and #2).
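A classical instance of such an iterative fit-compare-refine loop is Gerchberg-Saxton/Fienup-style error reduction, sketched below in Python. Here a known support constraint plays the role of the prior knowledge; the signal and iteration count are illustrative, and convergence to the true signal is not guaranteed in general, only a non-increasing magnitude mismatch.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Ground truth: a real signal with known support (the "prior knowledge");
# only its Fourier magnitudes are assumed observed, the phases are lost
N = 16
support = set(range(4, 9))
truth = [float(n) if n in support else 0.0 for n in range(N)]
mags = [abs(Fk) for Fk in dft(truth)]

def mag_residual(g):
    """Mismatch between the guess's Fourier magnitudes and the observed ones."""
    return sum((abs(Fk) - m) ** 2 for Fk, m in zip(dft(g), mags)) ** 0.5

g = [1.0 if n in support else 0.0 for n in range(N)]  # initial guess (phase prior)
res_start = mag_residual(g)

for _ in range(50):
    G = dft(g)
    # steps (1)-(2): keep the current phase estimates, impose observed magnitudes
    G = [m * cmath.exp(1j * cmath.phase(Gk)) for m, Gk in zip(mags, G)]
    # steps (3)-(4): back to the signal domain, re-impose the known constraints
    # (real-valued, zero outside the support) to refine the phase estimates
    g = [z.real if n in support else 0.0 for n, z in enumerate(idft(G))]

res_end = mag_residual(g)
```

Each pass alternates between enforcing the observed magnitudes and the prior constraints, so the magnitude residual can only shrink or stall, mirroring the iterate-compare-refine loop described above.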

## 2.2 Data Source Type

```r
library(EBImage)
library(TCIU)

square_arr <- matrix(nrow=256, ncol=256)
circle_arr <- matrix(nrow=256, ncol=256)

for (i in 1:256) {
  for (j in 1:256) {
    if (abs(i-128) < 30 && abs(j-128) < 30)
      square_arr[i,j] <- 1 # sqrt((i-128)^2+(j-128)^2)/30
    else square_arr[i,j] <- 0
    if (sqrt((i-128)^2 + (j-128)^2) < 30)
      circle_arr[i,j] <- 1 # 1-sqrt((i-128)^2+(j-128)^2)/30
    else circle_arr[i,j] <- 0
  }
}
```

## 2.3 Figure 3.6A

```r
# image(square_arr); image(circle_arr)
display(square_arr, method = "raster") # display(circle_arr, method = "raster")

X1 <- fft(square_arr)
X1_mag <- sqrt(Re(X1)^2 + Im(X1)^2)   # magnitude spectrum, equivalently Mod(X1)
display(fftshift(X1_mag), method = "raster")
```