ParametricDFT

Documentation for ParametricDFT.

ParametricDFT.AbstractLossType
AbstractLoss

Abstract base type for loss functions. Custom loss functions should inherit from this type and implement _loss_function(fft_result, input, loss).

source
ParametricDFT.AbstractSparseBasisType
AbstractSparseBasis

Abstract base type for sparse basis representations. All concrete basis types should inherit from this and implement the required interface:

Required methods:

  • forward_transform(basis, image) - transform image to frequency domain
  • inverse_transform(basis, freq_domain) - inverse transform
  • image_size(basis) - return supported image dimensions
  • num_parameters(basis) - return total parameter count
  • basis_hash(basis) - return unique hash for basis identification
source
ParametricDFT.CompressedImageType
CompressedImage

Sparse representation of an image in the frequency domain.

Fields

  • indices::Vector{Int}: Linear indices of non-zero coefficients
  • values_real::Vector{Float64}: Real parts of coefficient values
  • values_imag::Vector{Float64}: Imaginary parts of coefficient values
  • original_size::Tuple{Int,Int}: Original image dimensions (height, width)
  • basis_hash::String: Hash of the basis used for compression (for verification)

Note

The compressed representation stores only the non-zero coefficients after truncation, achieving compression by discarding small coefficients.

source
ParametricDFT.EntangledQFTBasisType
EntangledQFTBasis <: AbstractSparseBasis

Entangled Quantum Fourier Transform basis with XY correlation.

This basis extends the standard QFT by adding entanglement gates E_k between corresponding row and column qubits. Each entanglement gate has the same form as the M gate in QFT:

E_k = diag(1, 1, 1, e^(i*phi_k))

acting on qubits (x_{n-k}, y_{n-k}), where phi_k is a learnable phase parameter.

Fields

  • m::Int: Number of qubits for row dimension (image height = 2^m)
  • n::Int: Number of qubits for column dimension (image width = 2^n)
  • tensors::Vector: Circuit parameters (unitary matrices + entanglement gates)
  • optcode::AbstractEinsum: Optimized einsum code for forward transform
  • inverse_code::AbstractEinsum: Optimized einsum code for inverse transform
  • n_entangle::Int: Number of entanglement gates (= min(m, n))
  • entangle_phases::Vector{Float64}: Phase parameters for entanglement gates

Example

# Create default entangled QFT basis for 64×64 images
basis = EntangledQFTBasis(6, 6)

# Create with custom initial entanglement phases
phases = rand(6) * 2π
basis = EntangledQFTBasis(6, 6; entangle_phases=phases)

# Transform an image
freq = forward_transform(basis, image)

# Inverse transform
reconstructed = inverse_transform(basis, freq)
source
ParametricDFT.EntangledQFTBasisMethod
EntangledQFTBasis(m::Int, n::Int, tensors::Vector, n_entangle::Int; entangle_position=:back)

Construct an EntangledQFTBasis with custom trained tensors.

Arguments

  • m::Int: Number of qubits for rows
  • n::Int: Number of qubits for columns
  • tensors::Vector: Pre-trained circuit parameters
  • n_entangle::Int: Number of entanglement gates
  • entangle_position::Symbol: Where entanglement gates are placed (:front, :middle, :back)

Returns

  • EntangledQFTBasis: Basis with custom parameters
source
ParametricDFT.EntangledQFTBasisMethod
EntangledQFTBasis(m::Int, n::Int; entangle_phases=nothing, entangle_position=:back)

Construct an EntangledQFTBasis with default or custom entanglement phases.

Arguments

  • m::Int: Number of qubits for rows (image height = 2^m)
  • n::Int: Number of qubits for columns (image width = 2^n)
  • entangle_phases::Union{Nothing, Vector{<:Real}}: Initial phases for entanglement gates. If nothing, defaults to zeros (equivalent to standard QFT initially).
  • entangle_position::Symbol: Where to place entanglement gates (:front, :middle, :back)

Returns

  • EntangledQFTBasis: Basis with entangled QFT circuit parameters
source
ParametricDFT.L1NormType
L1Norm <: AbstractLoss

L1 norm loss: minimizes sum of absolute values in the transformed domain. This encourages sparsity in the frequency representation.

source
ParametricDFT.L2NormType
L2Norm <: AbstractLoss

L2 norm loss: minimizes sum of squared magnitudes in the transformed domain. This encourages energy concentration and yields smoother gradients than the L1 norm, at the cost of less aggressive sparsity promotion.

source
ParametricDFT.MSELossType
MSELoss <: AbstractLoss

Mean Squared Error loss with truncation: minimizes reconstruction error after forward transform, truncation (keeping top k elements), and inverse transform.

Fields

  • k::Int: Number of top elements to keep after truncation (by magnitude)

Equation

L(θ) = Σᵢ ||xᵢ - T(θ)⁻¹(truncate(T(θ)(xᵢ), k))||²₂

This loss encourages the circuit to learn a representation where the top k frequency components capture most of the signal information.
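The truncate-and-reconstruct pipeline above can be sketched with the standard FFT standing in for the parametric transform T(θ); `truncate_topk` is a hypothetical helper written for this illustration, not part of the ParametricDFT API.

```julia
using FFTW, LinearAlgebra

# Keep only the k largest-magnitude coefficients, zeroing the rest.
function truncate_topk(freq::AbstractMatrix, k::Int)
    out = zeros(eltype(freq), size(freq))
    idx = partialsortperm(vec(abs.(freq)), 1:k; rev=true)  # linear indices of top-k
    out[idx] = freq[idx]
    return out
end

x = rand(8, 8)
freq = fft(x)                                   # stand-in for T(θ)(x)
recon = real.(ifft(truncate_topk(freq, 16)))    # keep top 16, invert
mse = sum(abs2, x .- recon)                     # the per-image term in L(θ)
```

With k equal to the full coefficient count the reconstruction is exact and the loss term vanishes, which is why training pressure concentrates signal energy into the kept coefficients.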

source
ParametricDFT.QFTBasisType
QFTBasis <: AbstractSparseBasis

Quantum Fourier Transform basis using tensor network representation.

Fields

  • m::Int: Number of qubits for row dimension (image height = 2^m)
  • n::Int: Number of qubits for column dimension (image width = 2^n)
  • tensors::Vector: Circuit parameters (unitary matrices)
  • optcode::AbstractEinsum: Optimized einsum code for forward transform
  • inverse_code::AbstractEinsum: Optimized einsum code for inverse transform

Example

# Create default QFT basis for 64×64 images
basis = QFTBasis(6, 6)

# Transform an image
freq = forward_transform(basis, image)

# Inverse transform
reconstructed = inverse_transform(basis, freq)
source
ParametricDFT.QFTBasisMethod
QFTBasis(m::Int, n::Int, tensors::Vector)

Construct a QFTBasis with custom trained tensors.

Arguments

  • m::Int: Number of qubits for rows
  • n::Int: Number of qubits for columns
  • tensors::Vector: Pre-trained circuit parameters

Returns

  • QFTBasis: Basis with custom parameters
source
ParametricDFT.QFTBasisMethod
QFTBasis(m::Int, n::Int)

Construct a QFTBasis with default QFT circuit parameters.

Arguments

  • m::Int: Number of qubits for rows (image height = 2^m)
  • n::Int: Number of qubits for columns (image width = 2^n)

Returns

  • QFTBasis: Basis with standard QFT circuit parameters
source
ParametricDFT.TEBDBasisType
TEBDBasis <: AbstractSparseBasis

Time-Evolving Block Decimation (TEBD) basis with 2D ring topology.

This basis uses m row qubits and n column qubits with two separate rings:

  • Row ring: (x_1,x_2), (x_2,x_3), ..., (x_{m-1},x_m), (x_m,x_1) for m gates
  • Column ring: (y_1,y_2), (y_2,y_3), ..., (y_{n-1},y_n), (y_n,y_1) for n gates
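The two rings above can be enumerated directly; this is an illustrative sketch (not the package's internal code), with column qubits numbered after the m row qubits as an assumption about the qubit layout.

```julia
# Nearest-neighbour ring pairs for k qubits: (1,2), (2,3), ..., (k,1).
ring_pairs(k) = [(i, mod1(i + 1, k)) for i in 1:k]

m, n = 3, 3
row_pairs = ring_pairs(m)                                  # (x1,x2), (x2,x3), (x3,x1)
col_pairs = [(m + a, m + b) for (a, b) in ring_pairs(n)]   # column qubits offset by m
length(row_pairs) + length(col_pairs)                      # m + n gates in total
```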

Fields

  • m::Int: Number of row qubits (row dimension = 2^m)
  • n::Int: Number of column qubits (col dimension = 2^n)
  • tensors::Vector: Circuit parameters (TEBD gate tensors)
  • optcode::AbstractEinsum: Optimized einsum code for forward transform
  • inverse_code::AbstractEinsum: Optimized einsum code for inverse transform
  • n_row_gates::Int: Number of row ring phase gates (= m)
  • n_col_gates::Int: Number of column ring phase gates (= n)
  • phases::Vector{Float64}: Phase parameters for TEBD gates

Example

# Create default TEBD basis for 8×8 images (m=3, n=3)
basis = TEBDBasis(3, 3)

# Create with custom initial phases (6 gates total: 3 row ring + 3 col ring)
phases = rand(6) * 2π
basis = TEBDBasis(3, 3; phases=phases)

# Transform an image
freq = forward_transform(basis, image)

# Inverse transform
reconstructed = inverse_transform(basis, freq)
source
ParametricDFT.TEBDBasisMethod
TEBDBasis(m::Int, n::Int, tensors::Vector, n_row_gates::Int, n_col_gates::Int)

Construct a TEBDBasis with custom trained tensors.

Arguments

  • m::Int: Number of row qubits
  • n::Int: Number of column qubits
  • tensors::Vector: Pre-trained circuit parameters
  • n_row_gates::Int: Number of row ring gates
  • n_col_gates::Int: Number of column ring gates

Returns

  • TEBDBasis: Basis with custom parameters
source
ParametricDFT.TEBDBasisMethod
TEBDBasis(m::Int, n::Int; phases=nothing)

Construct a TEBDBasis with default or custom phases.

Arguments

  • m::Int: Number of row qubits (row dimension = 2^m)
  • n::Int: Number of column qubits (col dimension = 2^n)
  • phases::Union{Nothing, Vector{<:Real}}: Initial phases for TEBD gates. If nothing, defaults to zeros. Length must be m+n for ring topology.

Returns

  • TEBDBasis: Basis with TEBD circuit parameters
source
ParametricDFT.TrainingHistoryType
TrainingHistory

Stores the training history including losses per epoch and per step.

Fields

  • train_losses::Vector{Float64}: Average training loss per epoch
  • val_losses::Vector{Float64}: Validation loss per epoch
  • step_train_losses::Vector{Float64}: Training loss per step (per image processed)
  • basis_name::String: Name of the basis being trained
source
Base.:==Method
Base.:(==)(a::EntangledQFTBasis, b::EntangledQFTBasis)

Check equality of two EntangledQFTBasis objects.

source
Base.:==Method
Base.:(==)(a::QFTBasis, b::QFTBasis)

Check equality of two QFTBasis objects.

source
Base.:==Method
Base.:(==)(a::TEBDBasis, b::TEBDBasis)

Check equality of two TEBDBasis objects.

source
Base.showMethod
Base.show(io::IO, compressed::CompressedImage)

Pretty print the CompressedImage.

source
Base.showMethod
Base.show(io::IO, basis::EntangledQFTBasis)

Pretty print the EntangledQFTBasis.

source
Base.showMethod
Base.show(io::IO, basis::QFTBasis)

Pretty print the QFTBasis.

source
Base.showMethod
Base.show(io::IO, basis::TEBDBasis)

Pretty print the TEBDBasis.

source
ParametricDFT._build_manual_qftMethod
_build_manual_qft(n_qubits, qubit_offset, total_qubits)

Build a manual QFT gate chain (without using EasyBuild.qft_circuit) that returns individual gate operations. This is needed for the :middle entangle_position mode, where entanglement gates are interleaved with the QFT Hadamard gates.

The standard QFT for n qubits consists of:

  • For qubit j = 1, ..., n:
    • H(j)
    • For target k = j+1, ..., n: ctrl(k → j, phase=2π/2^(k-j+1))
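The loop structure above can be sketched as follows; gates are represented as plain tuples for illustration rather than as Yao blocks, so the gate counts are easy to check (n Hadamards and n(n-1)/2 controlled phases).

```julia
# Sketch of the gate ordering described above (tuples, not Yao blocks).
function qft_gate_list(n_qubits::Int, qubit_offset::Int)
    gates = Any[]
    for j in 1:n_qubits
        push!(gates, (:H, j + qubit_offset))                 # H(j)
        for k in (j + 1):n_qubits
            # ctrl(k → j) with phase 2π / 2^(k - j + 1)
            push!(gates, (:CPHASE, k + qubit_offset, j + qubit_offset, 2π / 2^(k - j + 1)))
        end
    end
    return gates
end

gates = qft_gate_list(3, 0)
count(g -> g[1] == :H, gates)        # 3 Hadamards
count(g -> g[1] == :CPHASE, gates)   # 3 = n(n-1)/2 controlled phases
```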

Arguments

  • n_qubits::Int: Number of qubits in this QFT block
  • qubit_offset::Int: Offset to add to qubit indices (0 for row, m for col)
  • total_qubits::Int: Total number of qubits in the full circuit

Returns

  • Vector{AbstractBlock}: Individual gate operations in order
source
ParametricDFT._select_top_coefficientsMethod
_select_top_coefficients(freq_domain::AbstractMatrix, k::Int)

Select top k coefficients using frequency-weighted magnitude scoring.

Low-frequency components (near center) are prioritized as they typically contain more important structural information.
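One plausible form of such a score is shown below; the exact weighting used by ParametricDFT is an implementation detail, so this sketch assumes a simple magnitude-over-distance score as an illustration.

```julia
# Hypothetical frequency-weighted score: boost coefficients near the
# spectrum centre by dividing magnitude by (1 + distance from centre),
# then keep the k highest-scoring linear indices.
function weighted_topk(freq::AbstractMatrix, k::Int)
    h, w = size(freq)
    cy, cx = (h + 1) / 2, (w + 1) / 2
    score = [abs(freq[i, j]) / (1 + hypot(i - cy, j - cx)) for i in 1:h, j in 1:w]
    return partialsortperm(vec(score), 1:k; rev=true)   # linear indices of top-k
end

idx = weighted_topk(ones(ComplexF64, 4, 4), 3)   # with equal magnitudes, centre wins
```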

source
ParametricDFT._train_basis_coreMethod
_train_basis_core(dataset, optcode, inverse_code, initial_tensors, m, n, loss,
                  epochs, steps_per_image, validation_split, shuffle,
                  early_stopping_patience, verbose, basis_name; extra_info="",
                  save_loss_path=nothing)

Core training loop shared by all basis types. Returns (final_tensors, best_val_loss, train_losses, val_losses, step_train_losses).

If save_loss_path is provided, saves per-step training losses to a JSON file with epoch, step, and loss fields for each entry.

source
ParametricDFT.basis_hashMethod
basis_hash(basis::EntangledQFTBasis)

Compute a unique hash identifying this basis configuration and parameters.

Returns

  • String: SHA-256 hash of the basis parameters
source
ParametricDFT.basis_hashMethod
basis_hash(basis::QFTBasis)

Compute a unique hash identifying this basis configuration and parameters.

Returns

  • String: SHA-256 hash of the basis parameters
source
ParametricDFT.basis_hashMethod
basis_hash(basis::TEBDBasis)

Compute a unique hash identifying this basis configuration and parameters.

Returns

  • String: SHA-256 hash of the basis parameters
source
ParametricDFT.basis_to_dictMethod
basis_to_dict(basis::EntangledQFTBasis) -> Dict

Convert an EntangledQFTBasis to a dictionary for custom serialization.

Returns

  • Dict: Dictionary representation of the basis
source
ParametricDFT.basis_to_dictMethod
basis_to_dict(basis::AbstractSparseBasis) -> Dict

Convert a basis to a dictionary for custom serialization.

Returns

  • Dict: Dictionary representation of the basis
source
ParametricDFT.basis_to_dictMethod
basis_to_dict(basis::TEBDBasis) -> Dict

Convert a TEBDBasis to a dictionary for custom serialization.

Returns

  • Dict: Dictionary representation of the basis
source
ParametricDFT.compressMethod
compress(basis::AbstractSparseBasis, image::AbstractMatrix; ratio::Float64=0.9)

Compress an image using the given sparse basis.

Arguments

  • basis::AbstractSparseBasis: The trained basis to use for compression
  • image::AbstractMatrix: Input image (must match basis dimensions)
  • ratio::Float64 = 0.9: Compression ratio (0.9 means keep only 10% of coefficients)

Returns

  • CompressedImage: Sparse representation of the image

Example

basis = load_basis("trained_basis.json")
image = load_grayscale_image("photo.png")
compressed = compress(basis, image; ratio=0.95)  # Keep top 5%
save_compressed("photo.cimg", compressed)
source
ParametricDFT.compress_with_kMethod
compress_with_k(basis::AbstractSparseBasis, image::AbstractMatrix; k::Int)

Compress an image keeping exactly k coefficients.

Arguments

  • basis::AbstractSparseBasis: The trained basis to use
  • image::AbstractMatrix: Input image
  • k::Int: Exact number of coefficients to keep

Returns

  • CompressedImage: Sparse representation
source
ParametricDFT.compression_statsMethod
compression_stats(compressed::CompressedImage) -> NamedTuple

Get statistics about the compression.

Returns

A named tuple with:

  • original_size: Original image dimensions
  • total_coefficients: Total number of coefficients
  • kept_coefficients: Number of non-zero coefficients kept
  • compression_ratio: Ratio of discarded coefficients
  • storage_reduction: Approximate storage reduction factor
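The arithmetic behind these statistics can be sketched as below; the field names mirror the NamedTuple above, while the storage model (dense coefficients vs. kept sparse entries) is an assumption for illustration.

```julia
# Hypothetical compression statistics for a 64×64 image keeping 410 coefficients.
h, w, kept = 64, 64, 410
total = h * w
compression_ratio = 1 - kept / total   # fraction of discarded coefficients
storage_reduction = total / kept       # roughly: dense entries per kept coefficient
```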
source
ParametricDFT.dict_to_basisMethod
dict_to_basis(d::Dict) -> AbstractSparseBasis

Convert a dictionary back to a basis.

Arguments

  • d::Dict: Dictionary with basis data

Returns

  • AbstractSparseBasis: The reconstructed basis
source
ParametricDFT.ema_smoothMethod
ema_smooth(values::Vector{Float64}, alpha::Float64) -> Vector{Float64}

Compute exponential moving average for smoothing noisy loss curves.

Arguments

  • values::Vector{Float64}: Raw values to smooth
  • alpha::Float64: Smoothing factor in (0, 1). Higher values = more smoothing. Common values: 0.6 (light), 0.9 (heavy), 0.95 (very heavy).

Returns

  • Vector{Float64}: Smoothed values (same length as input)
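A minimal EMA sketch, assuming the common recursion in which alpha weights the previous smoothed value (consistent with "higher values = more smoothing" above); the exact initialisation used by ParametricDFT is not specified here.

```julia
# Exponential moving average: out[i] = alpha * out[i-1] + (1 - alpha) * v[i].
function ema(values::Vector{Float64}, alpha::Float64)
    out = similar(values)
    out[1] = values[1]                 # assumed initialisation: first raw value
    for i in 2:length(values)
        out[i] = alpha * out[i - 1] + (1 - alpha) * values[i]
    end
    return out
end

s = ema([1.0, 0.0, 0.0, 0.0], 0.9)    # decays slowly toward 0
```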
source
ParametricDFT.entangled_qft_codeMethod
entangled_qft_code(m::Int, n::Int; entangle_phases=nothing, inverse=false, entangle_position=:back)

Generate an optimized tensor network representation of the entangled QFT circuit.

The entangled QFT extends the standard 2D QFT by adding entanglement gates E_k between corresponding row and column qubits. Each entanglement gate E_k is a controlled-phase gate with the same structure as the M gate in QFT:

Full gate form (4×4 matrix): E_k = diag(1, 1, 1, e^(i*phi_k))

Tensor network form (2×2 matrix): E_k_tensor = [1 0; 0 e^(i*phi_k)]

The gate acts on qubits (x_{n-k}, y_{n-k}), where phi_k is a learnable phase parameter. In the tensor network, these gates are represented as 2×2 matrices.

For a square 2^n × 2^n image (m = n), we add exactly n entanglement gates, one for each pair of corresponding row/column qubits.

Arguments

  • m::Int: Number of qubits for row indices (image height = 2^m)
  • n::Int: Number of qubits for column indices (image width = 2^n)
  • entangle_phases::Union{Nothing, Vector{<:Real}}: Initial phases for entanglement gates. If nothing, defaults to zeros (equivalent to standard QFT). Length must equal min(m, n).
  • inverse::Bool: If true, generate inverse transform code
  • entangle_position::Symbol: Where to place entanglement gates. One of:
    • :back (default): QFT_row ⊗ QFT_col → Entangle
    • :front: Entangle → QFT_row ⊗ QFT_col
    • :middle: Row and column QFT interleaved, with E_k placed after the row H(j) but BEFORE the column H(j). This produces a distinct result because E_k (a diagonal controlled-phase gate) does not commute with Hadamard gates.

Returns

  • optcode::AbstractEinsum: Optimized einsum contraction code
  • tensors::Vector: Circuit parameters (unitary matrices + entanglement gates)
  • n_entangle::Int: Number of entanglement gates added

Example

# Create entangled QFT for 64×64 images with default (zero) phases
optcode, tensors, n_entangle = entangled_qft_code(6, 6)

# Create with custom initial phases
phases = rand(6) * 2π
optcode, tensors, n_entangle = entangled_qft_code(6, 6; entangle_phases=phases)

# Create with entanglement at front
optcode, tensors, n_entangle = entangled_qft_code(6, 6; entangle_position=:front)

# Create with entanglement interleaved in middle
optcode, tensors, n_entangle = entangled_qft_code(6, 6; entangle_position=:middle)
source
ParametricDFT.entanglement_gateMethod
entanglement_gate(phi::Real)

Create the tensor network representation of a 2-qubit controlled-phase gate E with learnable phase phi.

Full gate form (4×4 matrix in computational basis): E = diag(1, 1, 1, e^(i*phi))

This applies phase e^(i*phi) only when both qubits are in state |1⟩.

Tensor network form (2×2 matrix): E_tensor = [1 0; 0 e^(i*phi)]

In the einsum tensor network decomposition, controlled-phase gates are represented as 2×2 matrices acting on the bond indices connecting the control and target qubits. This function returns the tensor form, not the full 4×4 gate matrix.

This gate has the same structure as the M gate in the QFT circuit, but with a learnable phase parameter instead of the fixed QFT phases (2π/2^k).
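Both forms can be written down directly; this is an illustrative sketch (not the package's `entanglement_gate` implementation), showing that at phi = 2π/2^k the learnable entry coincides with the fixed QFT M-gate phase.

```julia
using LinearAlgebra

full_gate(phi)   = Diagonal(ComplexF64[1, 1, 1, cis(phi)])   # 4×4 computational-basis form
tensor_gate(phi) = ComplexF64[1 0; 0 cis(phi)]               # 2×2 bond-index (tensor) form

phi = 2π / 2^3                                   # the fixed QFT phase for k = 3
full_gate(phi)[4, 4] == tensor_gate(phi)[2, 2]   # same learnable phase entry in both forms
```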

Arguments

  • phi::Real: Phase parameter (in radians)

Returns

  • Matrix{ComplexF64}: 2×2 matrix in tensor network form
source
ParametricDFT.extract_entangle_phasesMethod
extract_entangle_phases(tensors, entangle_indices::Vector{Int})

Extract the phase parameters from entanglement gate tensors.

Arguments

  • tensors::Vector: Circuit tensors
  • entangle_indices::Vector{Int}: Indices of entanglement gates

Returns

  • Vector{Float64}: Phase parameters phi_k for each entanglement gate
source
ParametricDFT.extract_tebd_phasesMethod
extract_tebd_phases(tensors, gate_indices::Vector{Int})

Extract the phase parameters from TEBD gate tensors.

Arguments

  • tensors::Vector: Circuit tensors
  • gate_indices::Vector{Int}: Indices of TEBD gates

Returns

  • Vector{Float64}: Phase parameters for each TEBD gate
source
ParametricDFT.fft_with_trainingMethod
fft_with_training(m::Int, n::Int, pic::Matrix, loss::AbstractLoss; steps::Int=1000, use_cuda::Bool=false)

Train a parametric 2D quantum DFT circuit using Riemannian gradient descent.

Arguments

  • m::Int: Number of qubits for row dimension (image height = 2^m)
  • n::Int: Number of qubits for column dimension (image width = 2^n)
  • pic::Matrix: Input signal (size must be 2^m × 2^n)
  • loss::AbstractLoss: Loss function (e.g., L1Norm())
  • steps::Int=1000: Maximum optimization iterations
  • use_cuda::Bool=false: Whether to use CUDA acceleration (not yet implemented)

Returns

  • Optimized parameters on the manifold (use point2tensors to convert to tensors)

Example

m, n = 6, 6  # For 64×64 image
pic = rand(ComplexF64, 2^m, 2^n)
# L1 norm loss
theta = fft_with_training(m, n, pic, L1Norm(); steps=200)
# MSE loss with truncation (keep top 100 elements)
theta = fft_with_training(m, n, pic, MSELoss(100); steps=200)
source
ParametricDFT.forward_transformMethod
forward_transform(basis::EntangledQFTBasis, image::AbstractMatrix)

Apply forward entangled QFT transform to convert image to frequency domain.

Arguments

  • basis::EntangledQFTBasis: The basis to use for transformation
  • image::AbstractMatrix: Input image (must be size 2^m × 2^n)

Returns

  • Frequency domain representation (Complex matrix of same size)
source
ParametricDFT.forward_transformMethod
forward_transform(basis::QFTBasis, image::AbstractMatrix)

Apply forward transform to convert image to frequency domain.

Arguments

  • basis::QFTBasis: The basis to use for transformation
  • image::AbstractMatrix: Input image (must be size 2^m × 2^n)

Returns

  • Frequency domain representation (Complex matrix of same size)
source
ParametricDFT.forward_transformMethod
forward_transform(basis::TEBDBasis, image::AbstractMatrix)

Apply forward TEBD transform to an image.

Arguments

  • basis::TEBDBasis: The basis to use for transformation
  • image::AbstractMatrix: Input image (must be 2^m × 2^n)

Returns

  • Transformed representation as matrix (same shape as input)
source
ParametricDFT.forward_transformMethod
forward_transform(basis::TEBDBasis, data::AbstractVector)

Apply forward TEBD transform to a vector.

Arguments

  • basis::TEBDBasis: The basis to use for transformation
  • data::AbstractVector: Input vector (must have length 2^(m+n))

Returns

  • Transformed representation (Complex vector of same length)
source
ParametricDFT.ft_matMethod
ft_mat(tensors::Vector, code::AbstractEinsum, m::Int, n::Int, pic::Matrix)

Apply 2D DFT to an image using the trained circuit parameters.

Arguments

  • tensors::Vector: Circuit tensors (unitary matrices)
  • code::AbstractEinsum: Optimized einsum code
  • m::Int: Number of qubits for row dimension
  • n::Int: Number of qubits for column dimension
  • pic::Matrix: Input image (size 2^m × 2^n)

Returns

  • Transformed image in frequency domain (size 2^m × 2^n)
source
ParametricDFT.generate_manifoldMethod
generate_manifold(tensors)

Generate product manifold for m+n-qubit QFT parameters. Returns a product of U(2) manifolds for Hadamard gates and U(1)^4 for controlled gates.

source
ParametricDFT.get_entangle_phasesMethod
get_entangle_phases(basis::EntangledQFTBasis)

Get the current entanglement phase parameters.

Returns

  • Vector{Float64}: Phase parameters phi_k for each entanglement gate
source
ParametricDFT.get_entangle_tensor_indicesMethod
get_entangle_tensor_indices(tensors, n_entangle::Int)

Identify which tensors in the circuit correspond to entanglement gates. Returns the indices of the entanglement gate tensors (the last n_entangle controlled-phase gates).

In the tensor network representation from yao2einsum, controlled-phase gates have the form [1, 1; 1, e^(i*phi)] (not diagonal). The entanglement gates are the last n_entangle such tensors after sorting (Hadamards first, then phase gates).

After training, tensors may drift slightly from the exact pattern, so we use tolerance-based magnitude checks.

Arguments

  • tensors::Vector: Circuit tensors
  • n_entangle::Int: Number of entanglement gates

Returns

  • Vector{Int}: Indices of entanglement gate tensors
source
ParametricDFT.get_manifoldMethod
get_manifold(basis::EntangledQFTBasis)

Get the product manifold for Riemannian optimization of basis parameters.

Returns

  • ProductManifold: Manifold structure for the tensors
source
ParametricDFT.get_manifoldMethod
get_manifold(basis::QFTBasis)

Get the product manifold for Riemannian optimization of basis parameters.

Returns

  • ProductManifold: Manifold structure for the tensors
source
ParametricDFT.get_manifoldMethod
get_manifold(basis::TEBDBasis)

Get the product manifold for Riemannian optimization of basis parameters.

Returns

  • ProductManifold: Manifold structure for the tensors
source
ParametricDFT.get_phasesMethod
get_phases(basis::TEBDBasis)

Get the current phase parameters.

Returns

  • Vector{Float64}: Phase parameters for each TEBD gate
source
ParametricDFT.get_tebd_gate_indicesMethod
get_tebd_gate_indices(tensors, n_gates::Int)

Identify which tensors correspond to TEBD gates.

Arguments

  • tensors::Vector: Circuit tensors
  • n_gates::Int: Number of TEBD gates

Returns

  • Vector{Int}: Indices of TEBD gate tensors
source
ParametricDFT.ift_matMethod
ift_mat(tensors::Vector, code::AbstractEinsum, m::Int, n::Int, pic::Matrix)

Apply inverse 2D DFT using the inverse QFT circuit with trained parameters.

Arguments

  • tensors::Vector: Circuit tensors (unitary matrices) from inverse QFT (use qft_code(m, n; inverse=true))
  • code::AbstractEinsum: Optimized einsum code from inverse QFT
  • m::Int: Number of qubits for row dimension
  • n::Int: Number of qubits for column dimension
  • pic::Matrix: Input in frequency domain (size 2^m × 2^n)

Returns

  • Transformed image in spatial domain (size 2^m × 2^n)
source
ParametricDFT.image_sizeMethod
image_size(basis::EntangledQFTBasis)

Return the supported image dimensions for this basis.

Returns

  • Tuple{Int,Int}: (height, width) = (2^m, 2^n)
source
ParametricDFT.image_sizeMethod
image_size(basis::QFTBasis)

Return the supported image dimensions for this basis.

Returns

  • Tuple{Int,Int}: (height, width) = (2^m, 2^n)
source
ParametricDFT.image_sizeMethod
image_size(basis::TEBDBasis)

Return the supported image dimensions for this basis.

Returns

  • Tuple{Int,Int}: (height, width) = (2^m, 2^n)
source
ParametricDFT.inverse_transformMethod
inverse_transform(basis::EntangledQFTBasis, freq_domain::AbstractMatrix)

Apply inverse entangled QFT transform to convert frequency domain back to image.

Arguments

  • basis::EntangledQFTBasis: The basis to use for transformation
  • freq_domain::AbstractMatrix: Frequency domain data (size 2^m × 2^n)

Returns

  • Reconstructed image (Complex matrix of same size)
source
ParametricDFT.inverse_transformMethod
inverse_transform(basis::QFTBasis, freq_domain::AbstractMatrix)

Apply inverse transform to convert frequency domain back to image.

Arguments

  • basis::QFTBasis: The basis to use for transformation
  • freq_domain::AbstractMatrix: Frequency domain data (size 2^m × 2^n)

Returns

  • Reconstructed image (Complex matrix of same size)
source
ParametricDFT.inverse_transformMethod
inverse_transform(basis::TEBDBasis, freq_domain::AbstractMatrix)

Apply inverse TEBD transform to a matrix.

Arguments

  • basis::TEBDBasis: The basis to use for transformation
  • freq_domain::AbstractMatrix: Frequency domain data (must be 2^m × 2^n)

Returns

  • Reconstructed data as matrix (same shape as input)
source
ParametricDFT.inverse_transformMethod
inverse_transform(basis::TEBDBasis, freq_domain::AbstractVector)

Apply inverse TEBD transform to convert back to original domain.

Arguments

  • basis::TEBDBasis: The basis to use for transformation
  • freq_domain::AbstractVector: Frequency domain data (length 2^(m+n))

Returns

  • Reconstructed data (Complex vector of same length)
source
ParametricDFT.load_basisMethod
load_basis(path::String) -> AbstractSparseBasis

Load a sparse basis from a JSON file.

Arguments

  • path::String: Path to the JSON file

Returns

  • AbstractSparseBasis: The loaded basis (concrete type depends on file contents)

Example

basis = load_basis("trained_basis.json")
freq = forward_transform(basis, image)
source
ParametricDFT.load_compressedMethod
load_compressed(path::String) -> CompressedImage

Load a compressed image from a JSON file.

Arguments

  • path::String: Path to the compressed image file

Returns

  • CompressedImage: The loaded compressed image
source
ParametricDFT.load_loss_historyMethod
load_loss_history(path::String) -> TrainingHistory

Load training loss history from a JSON file (saved by save_loss_history or save_loss_path) and return a TrainingHistory object that can be passed directly to plotting functions.

Arguments

  • path::String: Path to the JSON file

Returns

  • TrainingHistory: Object with train_losses, val_losses, step_train_losses, and basis_name

Example

using ParametricDFT
history = load_loss_history("training_loss.json")
fig = plot_training_loss(history)
save("loss_curve.png", fig)
source
ParametricDFT.loss_functionMethod
loss_function(tensors, m::Int, n::Int, optcode::AbstractEinsum, pic::Matrix, loss::AbstractLoss; inverse_code=nothing)

Compute the loss for current circuit parameters.

Arguments

  • tensors::Vector: Circuit parameters (unitary matrices)
  • m::Int: Number of qubits for row dimension
  • n::Int: Number of qubits for column dimension
  • optcode::AbstractEinsum: Optimized einsum code (forward transform)
  • pic::Matrix: Input signal (size must be 2^m × 2^n)
  • loss::AbstractLoss: Loss function type
  • inverse_code::AbstractEinsum: Optional inverse einsum code (required for MSELoss)

Returns

  • Loss value
source
ParametricDFT.num_gatesMethod
num_gates(basis::TEBDBasis)

Return the total number of TEBD gates.

Returns

  • Int: Number of gates (= m + n for ring topology)
source
ParametricDFT.num_parametersMethod
num_parameters(basis::EntangledQFTBasis)

Return the total number of learnable parameters in the basis.

For EntangledQFTBasis:

  • Standard QFT parameters (Hadamard gates + M gates)
  • Additional n_entangle = min(m, n) entanglement gate phases (one per row/column qubit pair)

Returns

  • Int: Total parameter count
source
ParametricDFT.num_parametersMethod
num_parameters(basis::QFTBasis)

Return the total number of learnable parameters in the basis.

For QFT basis:

  • n Hadamard gates: 4 parameters each (2×2 unitary)
  • n(n-1)/2 controlled-phase gates: 4 parameters each (diagonal 2×2)

Returns

  • Int: Total parameter count
source
ParametricDFT.num_parametersMethod
num_parameters(basis::TEBDBasis)

Return the total number of learnable parameters in the basis.

Returns

  • Int: Total parameter count
source
ParametricDFT.plot_training_comparisonMethod
plot_training_comparison(histories::Vector{TrainingHistory}; kwargs...)

Plot training loss curves for multiple bases on the same plot for comparison.

Arguments

  • histories::Vector{TrainingHistory}: Vector of training histories to compare

Keyword Arguments

  • title::String: Plot title (default: "Training Loss Comparison")
  • xlabel::String: X-axis label (default: "Epoch")
  • ylabel::String: Y-axis label (default: "Loss")
  • yscale::Function: Y-axis scale function (default: log10)
  • size::Tuple{Int,Int}: Figure size (default: (1000, 600))
  • loss_type::Symbol: Type of loss to plot (:train, :validation, or :both, default: :both)

Returns

  • Makie.Figure: The generated figure

Example

using ParametricDFT
basis1, hist1 = train_basis(QFTBasis, images; m=5, n=5, epochs=3)
basis2, hist2 = train_basis(TEBDBasis, images; m=5, n=5, epochs=3)
histories = [
    TrainingHistory(hist1.train_losses, hist1.val_losses, hist1.step_train_losses, "QFT"),
    TrainingHistory(hist2.train_losses, hist2.val_losses, hist2.step_train_losses, "TEBD")
]
fig = plot_training_comparison(histories)
save("training_comparison.png", fig)
source
ParametricDFT.plot_training_comparison_stepsMethod
plot_training_comparison_steps(histories::Vector{TrainingHistory}; kwargs...)

Plot per-step training loss curves for multiple bases on the same plot.

Arguments

  • histories::Vector{TrainingHistory}: Vector of training histories to compare

Keyword Arguments

  • title::String: Plot title (default: "Step Training Loss Comparison")
  • xlabel::String: X-axis label (default: "Step")
  • ylabel::String: Y-axis label (default: "Loss")
  • yscale::Function: Y-axis scale function (default: log10)
  • size::Tuple{Int,Int}: Figure size (default: (1000, 600))
  • smoothing::Float64: EMA smoothing factor in [0, 1); 0 disables smoothing (default: 0.0)

Returns

  • Makie.Figure: The generated figure

Example

using ParametricDFT
histories = [hist_qft, hist_entangled, hist_tebd]
fig = plot_training_comparison_steps(histories; smoothing=0.8)
save("step_comparison.png", fig)
source
ParametricDFT.plot_training_gridMethod
plot_training_grid(histories::Vector{TrainingHistory}; kwargs...)

Create a grid of individual training loss plots for multiple bases.

Arguments

  • histories::Vector{TrainingHistory}: Vector of training histories to plot

Keyword Arguments

  • title::String: Overall title (default: "Training Loss Comparison")
  • yscale::Function: Y-axis scale function (default: log10)
  • size::Tuple{Int,Int}: Total figure size (default: (1200, 800))
  • layout::Union{Nothing, Tuple{Int,Int}}: Grid layout (default: auto)

Returns

  • Makie.Figure: The generated grid figure

Example

using ParametricDFT
histories = [hist1, hist2, hist3]  # TrainingHistory objects
fig = plot_training_grid(histories, layout=(2, 2))
save("training_grid.png", fig)
source
ParametricDFT.plot_training_lossMethod
plot_training_loss(history::TrainingHistory; kwargs...)

Plot training and validation loss curves for a single basis.

Arguments

  • history::TrainingHistory: Training history to plot

Keyword Arguments

  • title::String: Plot title (default: "Training Loss - $(history.basis_name)")
  • xlabel::String: X-axis label (default: "Epoch")
  • ylabel::String: Y-axis label (default: "Loss")
  • yscale::Function: Y-axis scale function (default: log10)
  • size::Tuple{Int,Int}: Figure size (default: (800, 500))

Returns

  • Makie.Figure: The generated figure

Example

using ParametricDFT
basis, history = train_basis(QFTBasis, images; m=5, n=5, epochs=3)
hist = TrainingHistory(history.train_losses, history.val_losses,
                       history.step_train_losses, "QFT")
fig = plot_training_loss(hist)
save("qft_training_loss.png", fig)
source
ParametricDFT.plot_training_loss_stepsMethod
plot_training_loss_steps(history::TrainingHistory; kwargs...)

Plot per-step training loss curve for a single basis.

Arguments

  • history::TrainingHistory: Training history to plot

Keyword Arguments

  • title::String: Plot title (default: "Step Training Loss - $(history.basis_name)")
  • xlabel::String: X-axis label (default: "Step")
  • ylabel::String: Y-axis label (default: "Loss")
  • yscale::Function: Y-axis scale function (default: log10)
  • size::Tuple{Int,Int}: Figure size (default: (800, 500))
  • smoothing::Float64: EMA smoothing factor in [0, 1); 0.0 disables smoothing (default: 0.0)

Returns

  • Makie.Figure: The generated figure

Example

using ParametricDFT
basis, history = train_basis(QFTBasis, images; m=5, n=5, epochs=3)
hist = TrainingHistory(history.train_losses, history.val_losses, history.step_train_losses, "QFT")
fig = plot_training_loss_steps(hist; smoothing=0.8)
save("qft_step_loss.png", fig)
source
ParametricDFT.qft_codeMethod
qft_code(m::Int, n::Int; inverse=false)

Generate an optimized tensor network representation of the QFT circuit.

Arguments

  • m::Int: Number of qubits for row indices
  • n::Int: Number of qubits for column indices
  • inverse::Bool: If true, generate inverse transform code

Returns

  • optcode::AbstractEinsum: Optimized einsum contraction code
  • tensors::Vector: Initial circuit parameters (unitary matrices)
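
Example

A minimal sketch of generating matching forward and inverse circuits; the contraction itself is carried out by the higher-level transform functions:

# Forward QFT circuit for a 32×16 image (m=5 row qubits, n=4 column qubits)
optcode, tensors = qft_code(5, 4)

# Matching inverse circuit for reconstruction
inv_code, inv_tensors = qft_code(5, 4; inverse=true)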
source
ParametricDFT.recoverMethod
recover(basis::AbstractSparseBasis, compressed::CompressedImage; verify_hash::Bool=true)

Recover an image from its compressed representation.

Arguments

  • basis::AbstractSparseBasis: The basis used for compression
  • compressed::CompressedImage: The compressed image data
  • verify_hash::Bool = true: Whether to verify basis hash matches

Returns

  • Matrix{Float64}: Reconstructed image (real-valued)

Example

basis = load_basis("trained_basis.json")
compressed = load_compressed("photo.cimg")
recovered = recover(basis, compressed)
source
ParametricDFT.save_basisMethod
save_basis(path::String, basis::AbstractSparseBasis)

Save a sparse basis to a JSON file.

Arguments

  • path::String: File path to save to (should end in .json)
  • basis::AbstractSparseBasis: The basis to save

Example

basis, history = train_basis(QFTBasis, images; m=6, n=6)
save_basis("trained_basis.json", basis)
source
ParametricDFT.save_compressedMethod
save_compressed(path::String, compressed::CompressedImage)

Save a compressed image to a JSON file.

Arguments

  • path::String: File path to save to
  • compressed::CompressedImage: The compressed image to save

Returns

  • String: The path where the file was saved
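
Example

A sketch of a save/load round trip, pairing save_compressed with the load_compressed shown in the recover example; how the CompressedImage is produced in the first place is left to the compression step:

compressed = load_compressed("photo.cimg")
path = save_compressed("photo_backup.cimg", compressed)
println("Saved to ", path)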
source
ParametricDFT.save_loss_historyMethod
save_loss_history(path::String, history::NamedTuple)

Save training loss history returned by train_basis to a JSON file.

The JSON contains:

  • basis_name: Name of the trained basis
  • epoch_losses: Array of {epoch, train_loss, val_loss} per epoch
  • step_losses: Array of {epoch, step, loss} per training step

Arguments

  • path::String: Output file path (.json)
  • history::NamedTuple: Training history returned by train_basis

Example

basis, history = train_basis(QFTBasis, images; m=5, n=5, epochs=3)
save_loss_history("training_loss.json", history)
source
ParametricDFT.save_training_plotsMethod
save_training_plots(histories::Vector{TrainingHistory}, output_dir::String; prefix::String="", smoothing::Float64=0.6)

Generate and save all training visualization plots to a directory.

Arguments

  • histories::Vector{TrainingHistory}: Vector of training histories
  • output_dir::String: Directory to save plots

Keyword Arguments

  • prefix::String: Prefix for filenames (default: "")
  • smoothing::Float64: EMA smoothing factor for step-level plots (default: 0.6). Set to 0.0 to disable smoothed plots.

Returns

  • Vector{String}: Paths to saved plot files

Example

using ParametricDFT
histories = [
    TrainingHistory(hist1.train_losses, hist1.val_losses, hist1.step_train_losses, "QFT"),
    TrainingHistory(hist2.train_losses, hist2.val_losses, hist2.step_train_losses, "TEBD")
]
save_training_plots(histories, "output/"; smoothing=0.8)
source
ParametricDFT.tebd_codeMethod
tebd_code(m::Int, n::Int; phases=nothing, inverse=false)

Generate an optimized tensor network representation of a 2D TEBD circuit with ring topology.

The TEBD circuit consists of:

  1. Hadamard layer: H gates on all m+n qubits (creates frequency basis)
  2. Two separate rings of controlled-phase gates:
    • Row ring: (1,2), (2,3), ..., (m-1,m), (m,1) for m gates on row qubits
    • Column ring: (m+1,m+2), (m+2,m+3), ..., (m+n-1,m+n), (m+n,m+1) for n gates on column qubits

This creates a 2D separable transform with periodic boundary conditions (ring topology).

Arguments

  • m::Int: Number of row qubits (row dimension = 2^m)
  • n::Int: Number of column qubits (column dimension = 2^n)
  • phases::Union{Nothing, Vector{<:Real}}: Initial phases for TEBD gates. If nothing, defaults to zeros. Length must equal m+n for ring topology.
  • inverse::Bool: If true, generate inverse transform code

Returns

  • optcode::AbstractEinsum: Optimized einsum contraction code
  • tensors::Vector: Circuit parameters (Hadamard gates + TEBD gate tensors)
  • n_row_gates::Int: Number of row ring phase gates (= m)
  • n_col_gates::Int: Number of column ring phase gates (= n)

Example

# Create 2D TEBD circuit for 3×3 (m=3 row qubits, n=3 col qubits)
optcode, tensors, n_row, n_col = tebd_code(3, 3)

# Create with custom initial phases (6 gates total: 3 row ring + 3 col ring)
phases = rand(6) * 2π
optcode, tensors, n_row, n_col = tebd_code(3, 3; phases=phases)
source
ParametricDFT.topk_truncateMethod
topk_truncate(x::AbstractMatrix, k::Integer)

Return a copy of x with frequency-dependent truncation applied, keeping roughly k coefficients for image compression. Low-frequency components (near the center) are kept with higher priority, while high-frequency components (away from the center) are kept with lower priority.

This is more appropriate for image compression than global top-k selection, as it preserves more low-frequency information which contains most of the image structure.
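
Example

A sketch using the basis interface described above; forward_transform is assumed to return the complex frequency-domain matrix that topk_truncate operates on:

freq = forward_transform(basis, image)
k = round(Int, length(freq) * 0.1)   # keep roughly 10% of coefficients
truncated = topk_truncate(freq, k)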

source
ParametricDFT.train_basisMethod
train_basis(::Type{EntangledQFTBasis}, dataset::Vector{<:AbstractMatrix}; kwargs...)

Train an EntangledQFTBasis on a dataset of images using Riemannian gradient descent.

Arguments

  • ::Type{EntangledQFTBasis}: The basis type to train
  • dataset::Vector{<:AbstractMatrix}: Training images (each must be 2^m × 2^n)

Keyword Arguments

  • m::Int, n::Int: Number of qubits for rows/columns
  • entangle_phases::Union{Nothing, Vector{<:Real}}: Initial phases (default: zeros)
  • loss, epochs, steps_per_image, validation_split, shuffle, early_stopping_patience, verbose: Same as QFTBasis
  • save_loss_path::Union{Nothing, String} = nothing: Path to save training loss history as JSON

Returns

  • Tuple{EntangledQFTBasis, NamedTuple}: Trained basis and training history
    • basis: Trained EntangledQFTBasis with optimized entanglement phases
    • history: NamedTuple with fields train_losses, val_losses, step_train_losses, basis_name

Example

k = round(Int, 64 * 64 * 0.1)
basis, history = train_basis(EntangledQFTBasis, images; m=6, n=6, loss=MSELoss(k), epochs=5)
phases = get_entangle_phases(basis)
println("Training loss per epoch: ", history.train_losses)
source
ParametricDFT.train_basisMethod
train_basis(::Type{QFTBasis}, dataset::Vector{<:AbstractMatrix}; kwargs...)

Train a QFTBasis on a dataset of images using Riemannian gradient descent.

Arguments

  • ::Type{QFTBasis}: The basis type to train
  • dataset::Vector{<:AbstractMatrix}: Training images (each must be 2^m × 2^n)

Keyword Arguments

  • m::Int: Number of qubits for rows (image height = 2^m)
  • n::Int: Number of qubits for columns (image width = 2^n)
  • loss::AbstractLoss = MSELoss(k): Loss function for training
  • epochs::Int = 3: Number of training epochs
  • steps_per_image::Int = 200: Gradient descent steps per image
  • validation_split::Float64 = 0.2: Fraction of data for validation
  • shuffle::Bool = true: Whether to shuffle data each epoch
  • early_stopping_patience::Int = 2: Epochs without improvement before stopping
  • verbose::Bool = true: Whether to print training progress
  • save_loss_path::Union{Nothing, String} = nothing: Path to save training loss history as JSON

Returns

  • Tuple{QFTBasis, NamedTuple}: Trained basis and training history
    • basis: Trained QFTBasis with optimized parameters
    • history: NamedTuple with fields train_losses, val_losses, step_train_losses, basis_name

Example

images = [load_image(path) for path in image_paths]
k = round(Int, 64 * 64 * 0.1)  # Keep 10% of coefficients
basis, history = train_basis(QFTBasis, images; m=6, n=6, loss=MSELoss(k), epochs=5,
                             save_loss_path="qft_loss.json")
save_basis("trained_basis.json", basis)
println("Final training loss: ", history.train_losses[end])
source
ParametricDFT.train_basisMethod
train_basis(::Type{TEBDBasis}, dataset::Vector{<:AbstractMatrix}; kwargs...)

Train a TEBDBasis on a dataset of images using Riemannian gradient descent.

Arguments

  • ::Type{TEBDBasis}: The basis type to train
  • dataset::Vector{<:AbstractMatrix}: Training images (each must be 2^m × 2^n)

Keyword Arguments

  • m::Int: Number of row qubits (image height = 2^m)
  • n::Int: Number of column qubits (image width = 2^n)
  • phases::Union{Nothing, Vector{<:Real}}: Initial phases for TEBD gates. If nothing, uses small random values (randn * 0.1) to break symmetry. Length must be m+n for ring topology.
  • loss, epochs, steps_per_image, validation_split, shuffle, early_stopping_patience, verbose: Same as QFTBasis
  • save_loss_path::Union{Nothing, String} = nothing: Path to save training loss history as JSON

Returns

  • Tuple{TEBDBasis, NamedTuple}: Trained basis and training history
    • basis: Trained TEBDBasis with optimized parameters
    • history: NamedTuple with fields train_losses, val_losses, step_train_losses, basis_name

Example

k = round(Int, 64 * 64 * 0.1)  # Keep 10% of coefficients
basis, history = train_basis(TEBDBasis, images; m=6, n=6, loss=MSELoss(k), epochs=5)
println("Validation loss per epoch: ", history.val_losses)
source
ParametricDFT.train_basis_from_filesFunction
train_basis_from_files(::Type{QFTBasis}, image_paths::Vector{String}; 
                      target_size::Int, kwargs...)

Train a QFTBasis from image files, automatically resizing to power-of-2 dimensions.

Arguments

  • ::Type{QFTBasis}: The basis type to train
  • image_paths::Vector{String}: Paths to image files

Keyword Arguments

  • target_size::Int: Target size (must be power of 2, e.g., 64, 128, 256, 512)
  • kwargs...: Additional arguments passed to train_basis

Returns

  • QFTBasis: Trained basis

Note

This function requires the Images.jl package to be loaded.
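
Example

A sketch assuming a hypothetical photos/ directory of image files; Images.jl must be loaded first so resizing is available:

using ParametricDFT
using Images  # required for loading and resizing

paths = readdir("photos"; join=true)
basis = train_basis_from_files(QFTBasis, paths; target_size=64, epochs=3)
save_basis("trained_basis.json", basis)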

source