ParametricDFT
Documentation for ParametricDFT.
ParametricDFT.AbstractLoss — Type
AbstractLoss

Abstract base type for loss functions. Custom loss functions should inherit from this type and implement _loss_function(fft_result, input, loss).
ParametricDFT.AbstractSparseBasis — Type
AbstractSparseBasis

Abstract base type for sparse basis representations. All concrete basis types should inherit from this and implement the required interface:
Required methods:
- forward_transform(basis, image): transform an image to the frequency domain
- inverse_transform(basis, freq_domain): inverse transform
- image_size(basis): return the supported image dimensions
- num_parameters(basis): return the total parameter count
- basis_hash(basis): return a unique hash for basis identification
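As an illustration of the interface above, the following is a minimal sketch of a hypothetical basis type. The `IdentityBasis` struct and its methods are invented here for demonstration and are not part of ParametricDFT; a real implementation would subtype AbstractSparseBasis and extend the package's own functions.

```julia
using SHA  # Julia standard library, used for the hash

# Hypothetical trivial basis: the "transform" is the identity map.
struct IdentityBasis
    m::Int
    n::Int
end

forward_transform(b::IdentityBasis, image) = complex.(image)  # no-op transform
inverse_transform(b::IdentityBasis, freq) = freq
image_size(b::IdentityBasis) = (2^b.m, 2^b.n)
num_parameters(b::IdentityBasis) = 0
basis_hash(b::IdentityBasis) = bytes2hex(sha256("IdentityBasis-$(b.m)-$(b.n)"))

b = IdentityBasis(2, 2)
roundtrip = inverse_transform(b, forward_transform(b, ones(4, 4)))
```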
ParametricDFT.BasisJSON — Type
BasisJSON

Internal struct for JSON serialization of QFTBasis.
ParametricDFT.CompressedImage — Type
CompressedImage

Sparse representation of an image in the frequency domain.
Fields
- indices::Vector{Int}: Linear indices of non-zero coefficients
- values_real::Vector{Float64}: Real parts of coefficient values
- values_imag::Vector{Float64}: Imaginary parts of coefficient values
- original_size::Tuple{Int,Int}: Original image dimensions (height, width)
- basis_hash::String: Hash of the basis used for compression (for verification)
Note
The compressed representation stores only the non-zero coefficients after truncation, achieving compression by discarding small coefficients.
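The truncation idea can be sketched in a few lines. The helper below is illustrative, not package code; its NamedTuple fields merely mirror the CompressedImage fields listed above.

```julia
# Keep the k largest-magnitude coefficients of a frequency-domain matrix
# and record them sparsely (linear indices + real/imaginary parts).
function truncate_topk(freq::AbstractMatrix{<:Complex}, k::Int)
    idx = partialsortperm(vec(abs.(freq)), 1:k; rev=true)  # top-k by magnitude
    vals = vec(freq)[idx]
    (indices = collect(idx),
     values_real = real.(vals),
     values_imag = imag.(vals),
     original_size = size(freq))
end

freq = ComplexF64[4 0 0; 0 1im 0; 0 0 0.1]
sparse = truncate_topk(freq, 2)  # keeps the entries 4 and 1im
```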
ParametricDFT.CompressedImageJSON — Type
CompressedImageJSON

Internal struct for JSON serialization of CompressedImage.
ParametricDFT.EntangledBasisJSON — Type
EntangledBasisJSON

Internal struct for JSON serialization of EntangledQFTBasis.
ParametricDFT.EntangledQFTBasis — Type
EntangledQFTBasis <: AbstractSparseBasis

Entangled Quantum Fourier Transform basis with XY correlation.
This basis extends the standard QFT by adding entanglement gates E_k between corresponding row and column qubits. Each entanglement gate has the same form as the M gate in QFT:
E_k = diag(1, 1, 1, e^(i*phi_k))

acting on qubits (x_{n-k}, y_{n-k}), where phi_k is a learnable phase parameter.
Fields
- m::Int: Number of qubits for the row dimension (image height = 2^m)
- n::Int: Number of qubits for the column dimension (image width = 2^n)
- tensors::Vector: Circuit parameters (unitary matrices + entanglement gates)
- optcode::AbstractEinsum: Optimized einsum code for the forward transform
- inverse_code::AbstractEinsum: Optimized einsum code for the inverse transform
- n_entangle::Int: Number of entanglement gates (= min(m, n))
- entangle_phases::Vector{Float64}: Phase parameters for the entanglement gates
Example
# Create default entangled QFT basis for 64×64 images
basis = EntangledQFTBasis(6, 6)
# Create with custom initial entanglement phases
phases = rand(6) * 2π
basis = EntangledQFTBasis(6, 6; entangle_phases=phases)
# Transform an image
freq = forward_transform(basis, image)
# Inverse transform
reconstructed = inverse_transform(basis, freq)

ParametricDFT.EntangledQFTBasis — Method

EntangledQFTBasis(m::Int, n::Int, tensors::Vector, n_entangle::Int; entangle_position=:back)

Construct an EntangledQFTBasis with custom trained tensors.
Arguments
- m::Int: Number of qubits for rows
- n::Int: Number of qubits for columns
- tensors::Vector: Pre-trained circuit parameters
- n_entangle::Int: Number of entanglement gates
- entangle_position::Symbol: Where the entanglement gates are placed (:front, :middle, :back)
Returns
EntangledQFTBasis: Basis with custom parameters
ParametricDFT.EntangledQFTBasis — Method
EntangledQFTBasis(m::Int, n::Int; entangle_phases=nothing, entangle_position=:back)

Construct an EntangledQFTBasis with default or custom entanglement phases.
Arguments
- m::Int: Number of qubits for rows (image height = 2^m)
- n::Int: Number of qubits for columns (image width = 2^n)
- entangle_phases::Union{Nothing, Vector{<:Real}}: Initial phases for the entanglement gates. If nothing, defaults to zeros (equivalent to standard QFT initially).
- entangle_position::Symbol: Where to place the entanglement gates (:front, :middle, :back)
Returns
EntangledQFTBasis: Basis with entangled QFT circuit parameters
ParametricDFT.L1Norm — Type
L1Norm <: AbstractLoss

L1 norm loss: minimizes the sum of absolute values in the transformed domain. This encourages sparsity in the frequency representation.
ParametricDFT.L2Norm — Type
L2Norm <: AbstractLoss

L2 norm loss: minimizes the sum of squared magnitudes in the transformed domain. This encourages energy concentration, with smoother gradients and less aggressive sparsity promotion than the L1 norm.
ParametricDFT.MSELoss — Type
MSELoss <: AbstractLoss

Mean Squared Error loss with truncation: minimizes the reconstruction error after the forward transform, truncation (keeping the top k elements), and the inverse transform.
Fields
k::Int: Number of top elements to keep after truncation (by magnitude)
Equation
L(θ) = Σᵢ ||xᵢ - T(θ)⁻¹(truncate(T(θ)(xᵢ), k))||²₂
This loss encourages the circuit to learn a representation where the top k frequency components capture most of the signal information.
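The loss above can be sketched for a single 1D signal, using an explicit unitary DFT matrix F as a stand-in for the parametric transform T(θ); the helper `truncated_mse` is illustrative and not the package's loss implementation.

```julia
using LinearAlgebra

# L = ||x - F⁻¹(truncate(F x, k))||²₂ with F unitary (so F⁻¹ = F').
function truncated_mse(F::AbstractMatrix, x::AbstractVector, k::Int)
    y = F * x                                        # forward transform
    keep = partialsortperm(abs.(y), 1:k; rev=true)   # top-k by magnitude
    y_trunc = zero(y)
    y_trunc[keep] = y[keep]                          # truncation
    x_hat = F' * y_trunc                             # inverse transform
    sum(abs2, x - x_hat)
end

N = 8
F = [exp(-2π * im * j * k / N) / sqrt(N) for j in 0:N-1, k in 0:N-1]  # unitary DFT
x = ComplexF64.(cos.(2π .* (0:N-1) ./ N))  # one cosine cycle: 2 nonzero DFT bins
loss_k2 = truncated_mse(F, x, 2)           # near zero: top-2 bins capture x exactly
```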
ParametricDFT.QFTBasis — Type
QFTBasis <: AbstractSparseBasis

Quantum Fourier Transform basis using a tensor network representation.
Fields
- m::Int: Number of qubits for the row dimension (image height = 2^m)
- n::Int: Number of qubits for the column dimension (image width = 2^n)
- tensors::Vector: Circuit parameters (unitary matrices)
- optcode::AbstractEinsum: Optimized einsum code for the forward transform
- inverse_code::AbstractEinsum: Optimized einsum code for the inverse transform
Example
# Create default QFT basis for 64×64 images
basis = QFTBasis(6, 6)
# Transform an image
freq = forward_transform(basis, image)
# Inverse transform
reconstructed = inverse_transform(basis, freq)

ParametricDFT.QFTBasis — Method

QFTBasis(m::Int, n::Int, tensors::Vector)

Construct a QFTBasis with custom trained tensors.
Arguments
- m::Int: Number of qubits for rows
- n::Int: Number of qubits for columns
- tensors::Vector: Pre-trained circuit parameters
Returns
QFTBasis: Basis with custom parameters
ParametricDFT.QFTBasis — Method
QFTBasis(m::Int, n::Int)

Construct a QFTBasis with default QFT circuit parameters.
Arguments
- m::Int: Number of qubits for rows (image height = 2^m)
- n::Int: Number of qubits for columns (image width = 2^n)
Returns
QFTBasis: Basis with standard QFT circuit parameters
ParametricDFT.TEBDBasis — Type
TEBDBasis <: AbstractSparseBasis

Time-Evolving Block Decimation (TEBD) basis with a 2D ring topology.
This basis uses m row qubits and n column qubits with two separate rings:
- Row ring: (x_1,x_2), (x_2,x_3), ..., (x_{m-1},x_m), (x_m,x_1) for m gates
- Column ring: (y_1,y_2), (y_2,y_3), ..., (y_{n-1},y_n), (y_n,y_1) for n gates
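The ring pairing above is a one-liner; `ring_pairs` below is an illustrative helper, not a package function.

```julia
# Nearest-neighbour pairs on a ring of m qubits: (1,2), (2,3), ..., (m,1).
# mod1 wraps m+1 back to 1, closing the ring.
ring_pairs(m::Int) = [(i, mod1(i + 1, m)) for i in 1:m]

row_ring = ring_pairs(3)  # 3 gates for m = 3 row qubits
```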
Fields
- m::Int: Number of row qubits (row dimension = 2^m)
- n::Int: Number of column qubits (column dimension = 2^n)
- tensors::Vector: Circuit parameters (TEBD gate tensors)
- optcode::AbstractEinsum: Optimized einsum code for the forward transform
- inverse_code::AbstractEinsum: Optimized einsum code for the inverse transform
- n_row_gates::Int: Number of row ring phase gates (= m)
- n_col_gates::Int: Number of column ring phase gates (= n)
- phases::Vector{Float64}: Phase parameters for the TEBD gates
Example
# Create default TEBD basis for 8×8 images (m=3, n=3)
basis = TEBDBasis(3, 3)
# Create with custom initial phases (6 gates total: 3 row ring + 3 col ring)
phases = rand(6) * 2π
basis = TEBDBasis(3, 3; phases=phases)
# Transform an image
freq = forward_transform(basis, image)
# Inverse transform
reconstructed = inverse_transform(basis, freq)

ParametricDFT.TEBDBasis — Method

TEBDBasis(m::Int, n::Int, tensors::Vector, n_row_gates::Int, n_col_gates::Int)

Construct a TEBDBasis with custom trained tensors.
Arguments
- m::Int: Number of row qubits
- n::Int: Number of column qubits
- tensors::Vector: Pre-trained circuit parameters
- n_row_gates::Int: Number of row ring gates
- n_col_gates::Int: Number of column ring gates
Returns
TEBDBasis: Basis with custom parameters
ParametricDFT.TEBDBasis — Method
TEBDBasis(m::Int, n::Int; phases=nothing)

Construct a TEBDBasis with default or custom phases.
Arguments
- m::Int: Number of row qubits (row dimension = 2^m)
- n::Int: Number of column qubits (column dimension = 2^n)
- phases::Union{Nothing, Vector{<:Real}}: Initial phases for the TEBD gates. If nothing, defaults to zeros. Length must be m+n for the ring topology.
Returns
TEBDBasis: Basis with TEBD circuit parameters
ParametricDFT.TEBDBasisJSON — Type
TEBDBasisJSON

Internal struct for JSON serialization of TEBDBasis.
ParametricDFT.TrainingHistory — Type
TrainingHistory

Stores the training history, including losses per epoch and per step.
Fields
- train_losses::Vector{Float64}: Average training loss per epoch
- val_losses::Vector{Float64}: Validation loss per epoch
- step_train_losses::Vector{Float64}: Training loss per step (per image processed)
- basis_name::String: Name of the basis being trained
ParametricDFT._basis_to_json — Method
_basis_to_json(basis::EntangledQFTBasis)

Convert an EntangledQFTBasis to a JSON-serializable format.
ParametricDFT._basis_to_json — Method
_basis_to_json(basis::QFTBasis)

Convert a QFTBasis to a JSON-serializable format.
ParametricDFT._basis_to_json — Method
_basis_to_json(basis::TEBDBasis)

Convert a TEBDBasis to a JSON-serializable format.
ParametricDFT._build_manual_qft — Method
_build_manual_qft(n_qubits, qubit_offset, total_qubits)

Build a manual QFT gate chain (without using EasyBuild.qftcircuit) that returns individual gate operations. This is needed for the :middle entangle_position mode, where entanglement gates are interleaved with the QFT Hadamard gates.
The standard QFT for n qubits consists of:
- For qubit j = 1, ..., n:
- H(j)
- For target k = j+1, ..., n: ctrl(k → j, phase=2π/2^(k-j+1))
Arguments
- n_qubits::Int: Number of qubits in this QFT block
- qubit_offset::Int: Offset to add to qubit indices (0 for row, m for column)
- total_qubits::Int: Total number of qubits in the full circuit
Returns
Vector{AbstractBlock}: Individual gate operations in order
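The gate recursion listed above can be enumerated directly. The sketch below produces plain (gate, qubits, phase) tuples rather than the package's AbstractBlock objects; `manual_qft_gates` is an illustrative name, not package code.

```julia
# For each qubit j: one H(j), then controlled-phase gates ctrl(k → j)
# with phase 2π/2^(k-j+1) for every later qubit k.
function manual_qft_gates(n_qubits::Int, qubit_offset::Int = 0)
    gates = Any[]
    for j in 1:n_qubits
        push!(gates, (:H, qubit_offset + j, nothing))
        for k in (j + 1):n_qubits
            phase = 2π / 2^(k - j + 1)  # π/2, π/4, π/8, ...
            push!(gates, (:CPHASE, (qubit_offset + k, qubit_offset + j), phase))
        end
    end
    gates
end

g = manual_qft_gates(3)  # n + n(n-1)/2 = 6 gates in order
```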
ParametricDFT._compute_validation_loss — Method
_compute_validation_loss(validation_data, tensors, optcode, inverse_code, m, n, loss)

Compute the average loss over the validation set.
ParametricDFT._json_to_basis — Method
_json_to_basis(json_data::BasisJSON)

Convert JSON data back to a QFTBasis object.
ParametricDFT._json_to_entangled_basis — Method
_json_to_entangled_basis(json_data::EntangledBasisJSON)

Convert JSON data back to an EntangledQFTBasis object.
ParametricDFT._json_to_tebd_basis — Method
_json_to_tebd_basis(json_data::TEBDBasisJSON)

Convert JSON data back to a TEBDBasis object.
ParametricDFT._reconstruct_frequency_domain — Method
_reconstruct_frequency_domain(compressed::CompressedImage, size::Tuple{Int,Int})

Reconstruct the full frequency-domain matrix from the sparse representation.
ParametricDFT._save_loss_history — Method
_save_loss_history(path, basis_name, loss_records, train_losses, val_losses)

Internal function to save training loss history to a JSON file.
ParametricDFT._select_top_coefficients — Method
_select_top_coefficients(freq_domain::AbstractMatrix, k::Int)

Select the top k coefficients using frequency-weighted magnitude scoring.
Low-frequency components (near center) are prioritized as they typically contain more important structural information.
ParametricDFT._train_basis_core — Method
_train_basis_core(dataset, optcode, inverse_code, initial_tensors, m, n, loss,
                  epochs, steps_per_image, validation_split, shuffle,
                  early_stopping_patience, verbose, basis_name; extra_info="",
                  save_loss_path=nothing)

Core training loop shared by all basis types. Returns (final_tensors, best_val_loss, train_losses, val_losses, step_train_losses).

If save_loss_path is provided, saves per-step training losses to a JSON file with epoch, step, and loss fields for each entry.
ParametricDFT._train_on_single_image — Method
_train_on_single_image(img_matrix, theta, M, optcode, inverse_code, m, n, loss, steps)

Train on a single image using gradient descent.
ParametricDFT.basis_hash — Method
basis_hash(basis::EntangledQFTBasis)

Compute a unique hash identifying this basis configuration and parameters.
Returns
String: SHA-256 hash of the basis parameters
ParametricDFT.basis_hash — Method
basis_hash(basis::QFTBasis)

Compute a unique hash identifying this basis configuration and parameters.
Returns
String: SHA-256 hash of the basis parameters
ParametricDFT.basis_hash — Method
basis_hash(basis::TEBDBasis)

Compute a unique hash identifying this basis configuration and parameters.
Returns
String: SHA-256 hash of the basis parameters
ParametricDFT.basis_to_dict — Method
basis_to_dict(basis::EntangledQFTBasis) -> Dict

Convert an EntangledQFTBasis to a dictionary for custom serialization.
Returns
Dict: Dictionary representation of the basis
ParametricDFT.basis_to_dict — Method
basis_to_dict(basis::AbstractSparseBasis) -> Dict

Convert a basis to a dictionary for custom serialization.
Returns
Dict: Dictionary representation of the basis
ParametricDFT.basis_to_dict — Method
basis_to_dict(basis::TEBDBasis) -> Dict

Convert a TEBDBasis to a dictionary for custom serialization.
Returns
Dict: Dictionary representation of the basis
ParametricDFT.compress — Method
compress(basis::AbstractSparseBasis, image::AbstractMatrix; ratio::Float64=0.9)

Compress an image using the given sparse basis.
Arguments
- basis::AbstractSparseBasis: The trained basis to use for compression
- image::AbstractMatrix: Input image (must match basis dimensions)
- ratio::Float64 = 0.9: Compression ratio (0.9 means keep only 10% of coefficients)
Returns
CompressedImage: Sparse representation of the image
Example
basis = load_basis("trained_basis.json")
image = load_grayscale_image("photo.png")
compressed = compress(basis, image; ratio=0.95) # Keep top 5%
save_compressed("photo.cimg", compressed)

ParametricDFT.compress_with_k — Method

compress_with_k(basis::AbstractSparseBasis, image::AbstractMatrix; k::Int)

Compress an image keeping exactly k coefficients.
Arguments
- basis::AbstractSparseBasis: The trained basis to use
- image::AbstractMatrix: Input image
- k::Int: Exact number of coefficients to keep
Returns
CompressedImage: Sparse representation
ParametricDFT.compression_stats — Method
compression_stats(compressed::CompressedImage) -> NamedTuple

Get statistics about the compression.
Returns
A named tuple with:
- original_size: Original image dimensions
- total_coefficients: Total number of coefficients
- kept_coefficients: Number of non-zero coefficients kept
- compression_ratio: Ratio of discarded coefficients
- storage_reduction: Approximate storage reduction factor
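How such statistics follow from the kept-coefficient count can be sketched as below; the helper and its formulas are illustrative, and mirror the documented fields rather than reproducing the package implementation.

```julia
# Derive compression statistics from the image size and the number of
# coefficients kept after truncation.
function stats_from_counts(original_size::Tuple{Int,Int}, kept::Int)
    total = prod(original_size)
    (original_size      = original_size,
     total_coefficients = total,
     kept_coefficients  = kept,
     compression_ratio  = 1 - kept / total,   # fraction discarded
     storage_reduction  = total / max(kept, 1))
end

s = stats_from_counts((64, 64), 410)  # keep ~10% of a 64×64 image
```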
ParametricDFT.dict_to_basis — Method
dict_to_basis(d::Dict) -> AbstractSparseBasis

Convert a dictionary back to a basis.
Arguments
d::Dict: Dictionary with basis data
Returns
AbstractSparseBasis: The reconstructed basis
ParametricDFT.ema_smooth — Method
ema_smooth(values::Vector{Float64}, alpha::Float64) -> Vector{Float64}

Compute an exponential moving average for smoothing noisy loss curves.
Arguments
- values::Vector{Float64}: Raw values to smooth
- alpha::Float64: Smoothing factor in (0, 1). Higher values mean more smoothing. Common values: 0.6 (light), 0.9 (heavy), 0.95 (very heavy).
Returns
Vector{Float64}: Smoothed values (same length as input)
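A minimal EMA consistent with the description above (higher alpha, smoother output) can be sketched as follows; `ema` here is illustrative, not the package's ema_smooth.

```julia
# Exponential moving average: out[i] = alpha*out[i-1] + (1-alpha)*values[i].
# Output has the same length as the input.
function ema(values::Vector{Float64}, alpha::Float64)
    out = similar(values)
    out[1] = values[1]
    for i in 2:length(values)
        out[i] = alpha * out[i-1] + (1 - alpha) * values[i]
    end
    out
end

smoothed = ema([1.0, 0.0, 0.0], 0.9)  # decays slowly toward 0
```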
ParametricDFT.entangled_qft_code — Method
entangled_qft_code(m::Int, n::Int; entangle_phases=nothing, inverse=false, entangle_position=:back)

Generate an optimized tensor network representation of the entangled QFT circuit.
The entangled QFT extends the standard 2D QFT by adding entanglement gates E_k between corresponding row and column qubits. Each entanglement gate E_k is a controlled-phase gate with the same structure as the M gate in the QFT:

Full gate form (4×4 matrix): E_k = diag(1, 1, 1, e^(i*phi_k))

Tensor network form (2×2 matrix): E_k_tensor = [1 0; 0 e^(i*phi_k)]

The gate acts on qubits (x_{n-k}, y_{n-k}), where phi_k is a learnable phase parameter. In the tensor network, these are represented as 2×2 matrices.
For a square 2^n × 2^n image (m = n), we add exactly n entanglement gates, one for each pair of corresponding row/column qubits.
Arguments
- m::Int: Number of qubits for row indices (image height = 2^m)
- n::Int: Number of qubits for column indices (image width = 2^n)
- entangle_phases::Union{Nothing, Vector{<:Real}}: Initial phases for the entanglement gates. If nothing, defaults to zeros (equivalent to standard QFT). Length must equal min(m, n).
- inverse::Bool: If true, generate the inverse transform code
- entangle_position::Symbol: Where to place the entanglement gates. One of:
  - :back (default): QFT_row ⊗ QFT_col → Entangle
  - :front: Entangle → QFT_row ⊗ QFT_col
  - :middle: Row and column QFT interleaved, with E_k placed after the row H(j) but before the column H(j). This produces a distinct result because E_k (a diagonal controlled-phase gate) does not commute with Hadamard gates.
Returns
- optcode::AbstractEinsum: Optimized einsum contraction code
- tensors::Vector: Circuit parameters (unitary matrices + entanglement gates)
- n_entangle::Int: Number of entanglement gates added
Example
# Create entangled QFT for 64×64 images with default (zero) phases
optcode, tensors, n_entangle = entangled_qft_code(6, 6)
# Create with custom initial phases
phases = rand(6) * 2π
optcode, tensors, n_entangle = entangled_qft_code(6, 6; entangle_phases=phases)
# Create with entanglement at front
optcode, tensors, n_entangle = entangled_qft_code(6, 6; entangle_position=:front)
# Create with entanglement interleaved in middle
optcode, tensors, n_entangle = entangled_qft_code(6, 6; entangle_position=:middle)

ParametricDFT.entanglement_gate — Method

entanglement_gate(phi::Real)

Create the tensor network representation of a 2-qubit controlled-phase gate E with learnable phase phi.
Full gate form (4×4 matrix in computational basis): E = diag(1, 1, 1, e^(i*phi))
This applies phase e^(i*phi) only when both qubits are in state |1⟩.
Tensor network form (2×2 matrix): E_tensor = [1 0; 0 e^(i*phi)]
In the einsum tensor network decomposition, controlled-phase gates are represented as 2×2 matrices acting on the bond indices connecting the control and target qubits. This function returns the tensor form, not the full 4×4 gate matrix.
This gate has the same structure as the M gate in the QFT circuit, but with a learnable phase parameter instead of the fixed QFT phases (2π/2^k).
Arguments
phi::Real: Phase parameter (in radians)
Returns
Matrix{ComplexF64}: 2×2 matrix in tensor network form
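Both forms described above are easy to write down explicitly; the two helpers below are illustrative stand-ins, not package code. With phi = 0 each reduces to the identity, matching the statement that zero phases recover the standard QFT.

```julia
# Full 4×4 controlled-phase gate: applies e^(i*phi) only on |11⟩.
full_gate(phi) = ComplexF64[1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 exp(im * phi)]

# 2×2 tensor network form acting on the bond index.
tensor_gate(phi) = ComplexF64[1 0; 0 exp(im * phi)]

g = tensor_gate(π)  # phase of π flips the sign of the |1⟩ component
```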
ParametricDFT.extract_entangle_phases — Method
extract_entangle_phases(tensors, entangle_indices::Vector{Int})

Extract the phase parameters from the entanglement gate tensors.
Arguments
- tensors::Vector: Circuit tensors
- entangle_indices::Vector{Int}: Indices of the entanglement gates
Returns
Vector{Float64}: Phase parameters phi_k for each entanglement gate
ParametricDFT.extract_tebd_phases — Method
extract_tebd_phases(tensors, gate_indices::Vector{Int})

Extract the phase parameters from the TEBD gate tensors.
Arguments
- tensors::Vector: Circuit tensors
- gate_indices::Vector{Int}: Indices of the TEBD gates
Returns
Vector{Float64}: Phase parameters for each TEBD gate
ParametricDFT.fft_with_training — Method
fft_with_training(m::Int, n::Int, pic::Matrix, loss::AbstractLoss; steps::Int=1000, use_cuda::Bool=false)

Train a parametric 2D quantum DFT circuit using Riemannian gradient descent.
Arguments
- m::Int: Number of qubits for the row dimension (image height = 2^m)
- n::Int: Number of qubits for the column dimension (image width = 2^n)
- pic::Matrix: Input signal (size must be 2^m × 2^n)
- loss::AbstractLoss: Loss function (e.g., L1Norm())
- steps::Int = 1000: Maximum optimization iterations
- use_cuda::Bool = false: Whether to use CUDA acceleration (not yet implemented)
Returns
- Optimized parameters on the manifold (use point2tensors to convert to tensors)
Example
m, n = 6, 6 # For 64×64 image
pic = rand(ComplexF64, 2^m, 2^n)
# L1 norm loss
theta = fft_with_training(m, n, pic, L1Norm(); steps=200)
# MSE loss with truncation (keep top 100 elements)
theta = fft_with_training(m, n, pic, MSELoss(100); steps=200)

ParametricDFT.forward_transform — Method

forward_transform(basis::EntangledQFTBasis, image::AbstractMatrix)

Apply the forward entangled QFT transform to convert an image to the frequency domain.
Arguments
- basis::EntangledQFTBasis: The basis to use for the transformation
- image::AbstractMatrix: Input image (must be size 2^m × 2^n)
Returns
- Frequency domain representation (Complex matrix of same size)
ParametricDFT.forward_transform — Method
forward_transform(basis::QFTBasis, image::AbstractMatrix)

Apply the forward transform to convert an image to the frequency domain.
Arguments
- basis::QFTBasis: The basis to use for the transformation
- image::AbstractMatrix: Input image (must be size 2^m × 2^n)
Returns
- Frequency domain representation (Complex matrix of same size)
ParametricDFT.forward_transform — Method
forward_transform(basis::TEBDBasis, image::AbstractMatrix)

Apply the forward TEBD transform to an image.
Arguments
- basis::TEBDBasis: The basis to use for the transformation
- image::AbstractMatrix: Input image (must be 2^m × 2^n)
Returns
- Transformed representation as matrix (same shape as input)
ParametricDFT.forward_transform — Method
forward_transform(basis::TEBDBasis, data::AbstractVector)

Apply the forward TEBD transform to a vector.
Arguments
- basis::TEBDBasis: The basis to use for the transformation
- data::AbstractVector: Input vector (must have length 2^(m+n))
Returns
- Transformed representation (Complex vector of same length)
ParametricDFT.ft_mat — Method
ft_mat(tensors::Vector, code::AbstractEinsum, m::Int, n::Int, pic::Matrix)

Apply the 2D DFT to an image using the trained circuit parameters.
Arguments
- tensors::Vector: Circuit tensors (unitary matrices)
- code::AbstractEinsum: Optimized einsum code
- m::Int: Number of qubits for the row dimension
- n::Int: Number of qubits for the column dimension
- pic::Matrix: Input image (size 2^m × 2^n)
Returns
- Transformed image in frequency domain (size 2^m × 2^n)
ParametricDFT.generate_manifold — Method
generate_manifold(tensors)

Generate the product manifold for m+n-qubit QFT parameters. Returns a product of U(2) manifolds for Hadamard gates and U(1)^4 for controlled gates.
ParametricDFT.get_entangle_phases — Method
get_entangle_phases(basis::EntangledQFTBasis)

Get the current entanglement phase parameters.
Returns
Vector{Float64}: Phase parameters phi_k for each entanglement gate
ParametricDFT.get_entangle_tensor_indices — Method
get_entangle_tensor_indices(tensors, n_entangle::Int)

Identify which tensors in the circuit correspond to entanglement gates. Returns the indices of the entanglement gate tensors (the last n_entangle controlled-phase gates).
In the tensor network representation from yao2einsum, controlled-phase gates have the form [1, 1; 1, e^(i*phi)] (not diagonal). The entanglement gates are the last n_entangle such tensors after sorting (Hadamards first, then phase gates).
After training, tensors may drift slightly from the exact pattern, so we use tolerance-based magnitude checks.
Arguments
- tensors::Vector: Circuit tensors
- n_entangle::Int: Number of entanglement gates
Returns
Vector{Int}: Indices of entanglement gate tensors
ParametricDFT.get_manifold — Method
get_manifold(basis::EntangledQFTBasis)

Get the product manifold for Riemannian optimization of the basis parameters.
Returns
ProductManifold: Manifold structure for the tensors
ParametricDFT.get_manifold — Method
get_manifold(basis::QFTBasis)

Get the product manifold for Riemannian optimization of the basis parameters.
Returns
ProductManifold: Manifold structure for the tensors
ParametricDFT.get_manifold — Method
get_manifold(basis::TEBDBasis)

Get the product manifold for Riemannian optimization of the basis parameters.
Returns
ProductManifold: Manifold structure for the tensors
ParametricDFT.get_phases — Method
get_phases(basis::TEBDBasis)

Get the current phase parameters.
Returns
Vector{Float64}: Phase parameters for each TEBD gate
ParametricDFT.get_tebd_gate_indices — Method
get_tebd_gate_indices(tensors, n_gates::Int)

Identify which tensors correspond to TEBD gates.
Arguments
- tensors::Vector: Circuit tensors
- n_gates::Int: Number of TEBD gates
Returns
Vector{Int}: Indices of TEBD gate tensors
ParametricDFT.ift_mat — Method
ift_mat(tensors::Vector, code::AbstractEinsum, m::Int, n::Int, pic::Matrix)

Apply the inverse 2D DFT using the inverse QFT circuit with trained parameters.
Arguments
- tensors::Vector: Circuit tensors (unitary matrices) from the inverse QFT (use qft_code(m, n; inverse=true))
- code::AbstractEinsum: Optimized einsum code from the inverse QFT
- m::Int: Number of qubits for the row dimension
- n::Int: Number of qubits for the column dimension
- pic::Matrix: Input in the frequency domain (size 2^m × 2^n)
Returns
- Transformed image in spatial domain (size 2^m × 2^n)
ParametricDFT.image_size — Method
image_size(basis::EntangledQFTBasis)

Return the supported image dimensions for this basis.
Returns
Tuple{Int,Int}: (height, width) = (2^m, 2^n)
ParametricDFT.image_size — Method
image_size(basis::QFTBasis)

Return the supported image dimensions for this basis.
Returns
Tuple{Int,Int}: (height, width) = (2^m, 2^n)
ParametricDFT.image_size — Method
image_size(basis::TEBDBasis)

Return the supported image dimensions for this basis.
Returns
Tuple{Int,Int}: (height, width) = (2^m, 2^n)
ParametricDFT.inverse_transform — Method
inverse_transform(basis::EntangledQFTBasis, freq_domain::AbstractMatrix)

Apply the inverse entangled QFT transform to convert the frequency domain back to an image.
Arguments
- basis::EntangledQFTBasis: The basis to use for the transformation
- freq_domain::AbstractMatrix: Frequency domain data (size 2^m × 2^n)
Returns
- Reconstructed image (Complex matrix of same size)
ParametricDFT.inverse_transform — Method
inverse_transform(basis::QFTBasis, freq_domain::AbstractMatrix)

Apply the inverse transform to convert the frequency domain back to an image.
Arguments
- basis::QFTBasis: The basis to use for the transformation
- freq_domain::AbstractMatrix: Frequency domain data (size 2^m × 2^n)
Returns
- Reconstructed image (Complex matrix of same size)
ParametricDFT.inverse_transform — Method
inverse_transform(basis::TEBDBasis, freq_domain::AbstractMatrix)

Apply the inverse TEBD transform to a matrix.
Arguments
- basis::TEBDBasis: The basis to use for the transformation
- freq_domain::AbstractMatrix: Frequency domain data (must be 2^m × 2^n)
Returns
- Reconstructed data as matrix (same shape as input)
ParametricDFT.inverse_transform — Method
inverse_transform(basis::TEBDBasis, freq_domain::AbstractVector)

Apply the inverse TEBD transform to convert back to the original domain.
Arguments
- basis::TEBDBasis: The basis to use for the transformation
- freq_domain::AbstractVector: Frequency domain data (length 2^(m+n))
Returns
- Reconstructed data (Complex vector of same length)
ParametricDFT.load_basis — Method
load_basis(path::String) -> AbstractSparseBasis

Load a sparse basis from a JSON file.
Arguments
path::String: Path to the JSON file
Returns
AbstractSparseBasis: The loaded basis (concrete type depends on file contents)
Example
basis = load_basis("trained_basis.json")
freq = forward_transform(basis, image)

ParametricDFT.load_compressed — Method

load_compressed(path::String) -> CompressedImage

Load a compressed image from a JSON file.
Arguments
path::String: Path to the compressed image file
Returns
CompressedImage: The loaded compressed image
ParametricDFT.load_loss_history — Method
load_loss_history(path::String) -> TrainingHistory

Load training loss history from a JSON file (saved by save_loss_history or via save_loss_path) and return a TrainingHistory object that can be passed directly to the plotting functions.
Arguments
path::String: Path to the JSON file
Returns
TrainingHistory: Object with train_losses, val_losses, step_train_losses, and basis_name
Example
using ParametricDFT
history = load_loss_history("training_loss.json")
fig = plot_training_loss(history)
save("loss_curve.png", fig)

ParametricDFT.loss_function — Method

loss_function(tensors, m::Int, n::Int, optcode::AbstractEinsum, pic::Matrix, loss::AbstractLoss; inverse_code=nothing)

Compute the loss for the current circuit parameters.
Arguments
- tensors::Vector: Circuit parameters (unitary matrices)
- m::Int: Number of qubits for the row dimension
- n::Int: Number of qubits for the column dimension
- optcode::AbstractEinsum: Optimized einsum code (forward transform)
- pic::Matrix: Input signal (size must be 2^m × 2^n)
- loss::AbstractLoss: Loss function type
- inverse_code::AbstractEinsum: Optional inverse einsum code (required for MSELoss)
Returns
- Loss value
ParametricDFT.n_col_gates — Method
n_col_gates(n::Int)

Calculate the number of column ring phase gates for n column qubits.
ParametricDFT.n_row_gates — Method
n_row_gates(m::Int)

Calculate the number of row ring phase gates for m row qubits.
ParametricDFT.n_total_gates — Method
n_total_gates(m::Int, n::Int)

Calculate the total number of phase gates for an m×n TEBD circuit.
ParametricDFT.num_entangle_parameters — Method
num_entangle_parameters(basis::EntangledQFTBasis)

Return the number of entanglement phase parameters.
Returns
Int: Number of entanglement gates (= min(m, n))
ParametricDFT.num_gates — Method
num_gates(basis::TEBDBasis)

Return the total number of TEBD gates.
Returns
Int: Number of gates (= m + n for ring topology)
ParametricDFT.num_parameters — Method
num_parameters(basis::EntangledQFTBasis)

Return the total number of learnable parameters in the basis.
For EntangledQFTBasis:
- Standard QFT parameters (Hadamard gates + M gates)
- Additional n entanglement gate phases (one per qubit pair)
Returns
Int: Total parameter count
ParametricDFT.num_parameters — Method
num_parameters(basis::QFTBasis)
Return the total number of learnable parameters in the basis.
For QFT basis:
- n Hadamard gates: 4 parameters each (2×2 unitary)
- n(n-1)/2 controlled-phase gates: 4 parameters each (diagonal 2×2)
Returns
Int: Total parameter count
ParametricDFT.num_parameters — Method
num_parameters(basis::TEBDBasis)
Return the total number of learnable parameters in the basis.
Returns
Int: Total parameter count
ParametricDFT.plot_training_comparison — Method
plot_training_comparison(histories::Vector{TrainingHistory}; kwargs...)
Plot training loss curves for multiple bases on the same plot for comparison.
Arguments
histories::Vector{TrainingHistory}: Vector of training histories to compare
Keyword Arguments
title::String: Plot title (default: "Training Loss Comparison")
xlabel::String: X-axis label (default: "Epoch")
ylabel::String: Y-axis label (default: "Loss")
yscale::Function: Y-axis scale function (default: log10)
size::Tuple{Int,Int}: Figure size (default: (1000, 600))
loss_type::Symbol: Type of loss to plot (:train, :validation, or :both; default: :both)
Returns
Makie.Figure: The generated figure
Example
using ParametricDFT
basis1, hist1 = train_basis(QFTBasis, images; m=5, n=5, epochs=3)
basis2, hist2 = train_basis(TEBDBasis, images; m=5, n=5, epochs=3)
histories = [
TrainingHistory(hist1.train_losses, hist1.val_losses, hist1.step_train_losses, "QFT"),
TrainingHistory(hist2.train_losses, hist2.val_losses, hist2.step_train_losses, "TEBD")
]
fig = plot_training_comparison(histories)
save("training_comparison.png", fig)
ParametricDFT.plot_training_comparison_steps — Method
plot_training_comparison_steps(histories::Vector{TrainingHistory}; kwargs...)
Plot per-step training loss curves for multiple bases on the same plot.
Arguments
histories::Vector{TrainingHistory}: Vector of training histories to compare
Keyword Arguments
title::String: Plot title (default: "Step Training Loss Comparison")
xlabel::String: X-axis label (default: "Step")
ylabel::String: Y-axis label (default: "Loss")
yscale::Function: Y-axis scale function (default: log10)
size::Tuple{Int,Int}: Figure size (default: (1000, 600))
smoothing::Float64: EMA smoothing factor in (0, 1); 0 disables smoothing (default: 0.0)
Returns
Makie.Figure: The generated figure
Example
using ParametricDFT
histories = [hist_qft, hist_entangled, hist_tebd]
fig = plot_training_comparison_steps(histories; smoothing=0.8)
save("step_comparison.png", fig)
ParametricDFT.plot_training_grid — Method
plot_training_grid(histories::Vector{TrainingHistory}; kwargs...)
Create a grid of individual training loss plots for multiple bases.
Arguments
histories::Vector{TrainingHistory}: Vector of training histories to plot
Keyword Arguments
title::String: Overall title (default: "Training Loss Comparison")
yscale::Function: Y-axis scale function (default: log10)
size::Tuple{Int,Int}: Total figure size (default: (1200, 800))
layout::Union{Nothing, Tuple{Int,Int}}: Grid layout (default: auto)
Returns
Makie.Figure: The generated grid figure
Example
using ParametricDFT
histories = [hist1, hist2, hist3] # TrainingHistory objects
fig = plot_training_grid(histories, layout=(2, 2))
save("training_grid.png", fig)
ParametricDFT.plot_training_loss — Method
plot_training_loss(history::TrainingHistory; kwargs...)
Plot training and validation loss curves for a single basis.
Arguments
history::TrainingHistory: Training history to plot
Keyword Arguments
title::String: Plot title (default: "Training Loss - history.basis_name")
xlabel::String: X-axis label (default: "Epoch")
ylabel::String: Y-axis label (default: "Loss")
yscale::Function: Y-axis scale function (default: log10)
size::Tuple{Int,Int}: Figure size (default: (800, 500))
Returns
Makie.Figure: The generated figure
Example
using ParametricDFT
basis, history = train_basis(QFTBasis, images; m=5, n=5, epochs=3)
hist = TrainingHistory(history.train_losses, history.val_losses,
history.step_train_losses, "QFT")
fig = plot_training_loss(hist)
save("qft_training_loss.png", fig)
ParametricDFT.plot_training_loss_steps — Method
plot_training_loss_steps(history::TrainingHistory; kwargs...)
Plot the per-step training loss curve for a single basis.
Arguments
history::TrainingHistory: Training history to plot
Keyword Arguments
title::String: Plot title (default: "Step Training Loss - history.basis_name")
xlabel::String: X-axis label (default: "Step")
ylabel::String: Y-axis label (default: "Loss")
yscale::Function: Y-axis scale function (default: log10)
size::Tuple{Int,Int}: Figure size (default: (800, 500))
smoothing::Float64: EMA smoothing factor in (0, 1); 0 disables smoothing (default: 0.0)
Returns
Makie.Figure: The generated figure
Example
using ParametricDFT
basis, history = train_basis(QFTBasis, images; m=5, n=5, epochs=3)
hist = TrainingHistory(history.train_losses, history.val_losses, history.step_train_losses, "QFT")
fig = plot_training_loss_steps(hist; smoothing=0.8)
save("qft_step_loss.png", fig)
ParametricDFT.point2tensors — Method
point2tensors(p, M)
Convert a manifold point back to circuit tensors (unitary matrices).
ParametricDFT.qft_code — Method
qft_code(m::Int, n::Int; inverse=false)
Generate an optimized tensor network representation of the QFT circuit.
Arguments
m::Int: Number of qubits for row indices
n::Int: Number of qubits for column indices
inverse::Bool: If true, generate inverse transform code
Returns
optcode::AbstractEinsum: Optimized einsum contraction code
tensors::Vector: Initial circuit parameters (unitary matrices)
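Example
A minimal sketch of building both the forward and inverse codes (the tensor count printed is illustrative, not a documented value):
```julia
using ParametricDFT

# Forward and inverse QFT contraction codes for a 4×4 transform (m = n = 2).
optcode, tensors = qft_code(2, 2)
inv_code, inv_tensors = qft_code(2, 2; inverse=true)
@show length(tensors)   # number of initial circuit tensors in the forward circuit
```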
ParametricDFT.recover — Method
recover(basis::AbstractSparseBasis, compressed::CompressedImage; verify_hash::Bool=true)
Recover an image from its compressed representation.
Arguments
basis::AbstractSparseBasis: The basis used for compression
compressed::CompressedImage: The compressed image data
verify_hash::Bool = true: Whether to verify that the basis hash matches
Returns
Matrix{Float64}: Reconstructed image (real-valued)
Example
basis = load_basis("trained_basis.json")
compressed = load_compressed("photo.cimg")
recovered = recover(basis, compressed)
ParametricDFT.save_basis — Method
save_basis(path::String, basis::AbstractSparseBasis)
Save a sparse basis to a JSON file.
Arguments
path::String: File path to save to (should end in .json)
basis::AbstractSparseBasis: The basis to save
Example
basis = train_basis(QFTBasis, images; m=6, n=6)
save_basis("trained_basis.json", basis)
ParametricDFT.save_compressed — Method
save_compressed(path::String, compressed::CompressedImage)
Save a compressed image to a JSON file.
Arguments
path::String: File path to save to
compressed::CompressedImage: The compressed image to save
Returns
String: The path where the file was saved
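Example
A round-trip sketch; the call compress(basis, img) is an assumed signature (see compress in the API index), and the file paths are hypothetical:
```julia
using ParametricDFT

basis = load_basis("trained_basis.json")      # hypothetical path
img = rand(64, 64)                            # stand-in for a real image
compressed = compress(basis, img)             # assumed signature
path = save_compressed("photo.cimg", compressed)
recovered = recover(basis, load_compressed(path))
```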
ParametricDFT.save_loss_history — Method
save_loss_history(path::String, history::NamedTuple)
Save the training loss history returned by train_basis to a JSON file.
The JSON contains:
basis_name: Name of the trained basis
epoch_losses: Array of {epoch, train_loss, val_loss} per epoch
step_losses: Array of {epoch, step, loss} per training step
Arguments
path::String: Output file path (.json)
history::NamedTuple: Training history returned by train_basis
Example
basis, history = train_basis(QFTBasis, images; m=5, n=5, epochs=3)
save_loss_history("training_loss.json", history)
ParametricDFT.save_training_plots — Method
save_training_plots(histories::Vector{TrainingHistory}, output_dir::String; prefix::String="", smoothing::Float64=0.6)
Generate and save all training visualization plots to a directory.
Arguments
histories::Vector{TrainingHistory}: Vector of training histories
output_dir::String: Directory to save plots
Keyword Arguments
prefix::String: Prefix for filenames (default: "")
smoothing::Float64: EMA smoothing factor for step-level plots (default: 0.6). Set to 0.0 to disable smoothed plots.
Returns
Vector{String}: Paths to saved plot files
Example
using ParametricDFT
histories = [
TrainingHistory(hist1.train_losses, hist1.val_losses, hist1.step_train_losses, "QFT"),
TrainingHistory(hist2.train_losses, hist2.val_losses, hist2.step_train_losses, "TEBD")
]
save_training_plots(histories, "output/"; smoothing=0.8)
ParametricDFT.tebd_code — Method
tebd_code(m::Int, n::Int; phases=nothing, inverse=false)
Generate an optimized tensor network representation of a 2D TEBD circuit with ring topology.
The TEBD circuit consists of:
- Hadamard layer: H gates on all m+n qubits (creates frequency basis)
- Two separate rings of controlled-phase gates:
- Row ring: (1,2), (2,3), ..., (m-1,m), (m,1) for m gates on row qubits
- Column ring: (m+1,m+2), (m+2,m+3), ..., (m+n-1,m+n), (m+n,m+1) for n gates on column qubits
This creates a 2D separable transform with periodic boundary conditions (ring topology).
Arguments
m::Int: Number of row qubits (row dimension = 2^m)
n::Int: Number of column qubits (column dimension = 2^n)
phases::Union{Nothing, Vector{<:Real}}: Initial phases for TEBD gates. If nothing, defaults to zeros. Length must equal m+n for ring topology.
inverse::Bool: If true, generate inverse transform code
Returns
optcode::AbstractEinsum: Optimized einsum contraction code
tensors::Vector: Circuit parameters (Hadamard gates + TEBD gate tensors)
n_row_gates::Int: Number of row ring phase gates (= m)
n_col_gates::Int: Number of column ring phase gates (= n)
Example
# Create 2D TEBD circuit for 3×3 (m=3 row qubits, n=3 col qubits)
optcode, tensors, n_row, n_col = tebd_code(3, 3)
# Create with custom initial phases (6 gates total: 3 row ring + 3 col ring)
phases = rand(6) * 2π
optcode, tensors, n_row, n_col = tebd_code(3, 3; phases=phases)
ParametricDFT.tensors2point — Method
tensors2point(tensors, M::ProductManifold)
Convert circuit tensors (unitary matrices) to a point on the product manifold.
ParametricDFT.topk_truncate — Method
topk_truncate(x::AbstractMatrix, k::Integer)
Return a matrix where frequency-dependent truncation is applied for image compression. Low-frequency components (near the center) are kept with higher priority, while high-frequency components (away from the center) are kept with lower priority.
This is more appropriate for image compression than global top-k selection, as it preserves more low-frequency information which contains most of the image structure.
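Example
A sketch of the intended use; the convention that dropped coefficients are zeroed is an assumption about the return value:
```julia
using ParametricDFT

# Truncate an 8×8 frequency-domain matrix to k = 16 coefficients,
# favoring low-frequency (near-center) entries as described above.
X = randn(8, 8)
Xk = topk_truncate(X, 16)
# Xk is assumed to have the same size as X, with coefficients outside
# the kept set zeroed.
```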
ParametricDFT.train_basis — Method
train_basis(::Type{EntangledQFTBasis}, dataset::Vector{<:AbstractMatrix}; kwargs...)
Train an EntangledQFTBasis on a dataset of images using Riemannian gradient descent.
Arguments
::Type{EntangledQFTBasis}: The basis type to train
dataset::Vector{<:AbstractMatrix}: Training images (each must be 2^m × 2^n)
Keyword Arguments
m::Int, n::Int: Number of qubits for rows/columns
entangle_phases::Union{Nothing, Vector{<:Real}}: Initial phases (default: zeros)
loss, epochs, steps_per_image, validation_split, shuffle, early_stopping_patience, verbose: Same as for QFTBasis
save_loss_path::Union{Nothing, String} = nothing: Path to save training loss history as JSON
Returns
Tuple{EntangledQFTBasis, NamedTuple}: Trained basis and training history
basis: Trained EntangledQFTBasis with optimized entanglement phases
history: NamedTuple with fields train_losses, val_losses, step_train_losses, basis_name
Example
k = round(Int, 64 * 64 * 0.1)
basis, history = train_basis(EntangledQFTBasis, images; m=6, n=6, loss=MSELoss(k), epochs=5)
phases = get_entangle_phases(basis)
println("Training loss per epoch: ", history.train_losses)
ParametricDFT.train_basis — Method
train_basis(::Type{QFTBasis}, dataset::Vector{<:AbstractMatrix}; kwargs...)
Train a QFTBasis on a dataset of images using Riemannian gradient descent.
Arguments
::Type{QFTBasis}: The basis type to train
dataset::Vector{<:AbstractMatrix}: Training images (each must be 2^m × 2^n)
Keyword Arguments
m::Int: Number of qubits for rows (image height = 2^m)
n::Int: Number of qubits for columns (image width = 2^n)
loss::AbstractLoss = MSELoss(k): Loss function for training
epochs::Int = 3: Number of training epochs
steps_per_image::Int = 200: Gradient descent steps per image
validation_split::Float64 = 0.2: Fraction of data for validation
shuffle::Bool = true: Whether to shuffle data each epoch
early_stopping_patience::Int = 2: Epochs without improvement before stopping
verbose::Bool = true: Whether to print training progress
save_loss_path::Union{Nothing, String} = nothing: Path to save training loss history as JSON
Returns
Tuple{QFTBasis, NamedTuple}: Trained basis and training history
basis: Trained QFTBasis with optimized parameters
history: NamedTuple with fields train_losses, val_losses, step_train_losses, basis_name
Example
images = [load_image(path) for path in image_paths]
k = round(Int, 64 * 64 * 0.1) # Keep 10% of coefficients
basis, history = train_basis(QFTBasis, images; m=6, n=6, loss=MSELoss(k), epochs=5,
save_loss_path="qft_loss.json")
save_basis("trained_basis.json", basis)
println("Final training loss: ", history.train_losses[end])
ParametricDFT.train_basis — Method
train_basis(::Type{TEBDBasis}, dataset::Vector{<:AbstractMatrix}; kwargs...)
Train a TEBDBasis on a dataset of images using Riemannian gradient descent.
Arguments
::Type{TEBDBasis}: The basis type to train
dataset::Vector{<:AbstractMatrix}: Training images (each must be 2^m × 2^n)
Keyword Arguments
m::Int: Number of row qubits (image height = 2^m)
n::Int: Number of column qubits (image width = 2^n)
phases::Union{Nothing, Vector{<:Real}}: Initial phases for TEBD gates. If nothing, uses small random values (randn * 0.1) to break symmetry. Length must be m+n for ring topology.
loss, epochs, steps_per_image, validation_split, shuffle, early_stopping_patience, verbose: Same as for QFTBasis
save_loss_path::Union{Nothing, String} = nothing: Path to save training loss history as JSON
Returns
Tuple{TEBDBasis, NamedTuple}: Trained basis and training history
basis: Trained TEBDBasis with optimized parameters
history: NamedTuple with fields train_losses, val_losses, step_train_losses, basis_name
Example
k = round(Int, 64 * 64 * 0.1) # Keep 10% of coefficients
basis, history = train_basis(TEBDBasis, images; m=6, n=6, loss=MSELoss(k), epochs=5)
println("Validation loss per epoch: ", history.val_losses)
ParametricDFT.train_basis_from_files — Function
train_basis_from_files(::Type{QFTBasis}, image_paths::Vector{String}; target_size::Int, kwargs...)
Train a QFTBasis from image files, automatically resizing to power-of-2 dimensions.
Arguments
::Type{QFTBasis}: The basis type to train
image_paths::Vector{String}: Paths to image files
Keyword Arguments
target_size::Int: Target size (must be a power of 2, e.g., 64, 128, 256, 512)
kwargs...: Additional arguments passed to train_basis
Returns
QFTBasis: Trained basis
Note
This function requires the Images.jl package to be loaded.
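Example
A usage sketch; the file paths are hypothetical, and passing epochs through kwargs... assumes it is forwarded to train_basis unchanged:
```julia
using ParametricDFT
using Images   # required, per the note above

paths = ["photos/a.png", "photos/b.png"]      # hypothetical image files
basis = train_basis_from_files(QFTBasis, paths; target_size=64, epochs=3)
save_basis("trained_from_files.json", basis)
```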