
In particular, it is often intractable to simultaneously optimize more than several parameters in conjunction. Taking brute-force optimization as an example, this challenge arises because the number of samples required to comprehensively search a set of box-constrained parameters grows exponentially with the number of parameters. Where gradients are available, however, optimization becomes a much easier task, and this advantage grows starker as the number of parameters increases. With the introduction of analytical gradients, an arsenal of well-developed nonlinear optimization algorithms becomes available.

$$\Phi_F = E_{i,0} + k_B T \log\!\left(\frac{N_{d,0}}{n_{i,0}}\right), \qquad \Phi_B = E_{i,L} - k_B T \log\!\left(\frac{N_{a,L}}{n_{i,L}}\right).$$

For our nonlinear constrained optimization problem we select the Sequential Least Squares Programming (SLSQP) method kraft1989slsqp. SLSQP has been implemented in several open-source tools, including Optim mogensen2018optim, NLopt johnson2014nlopt, pyOpt perez2012pyopt and SciPy virtanen2020scipy. Starting from a randomly sampled initial design with 6.49% PCE, the algorithm terminates after only 306 PDE solves, arriving at an optimal point with a PCE of 21.62%. This is higher than the PCE of all 200 randomly sampled designs, which together amount to roughly 4000 PDE solves.
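As a minimal sketch of this workflow, the snippet below runs SciPy's SLSQP with an analytic gradient supplied by JAX. Here `toy_pce` is a hypothetical quadratic surrogate standing in for the PCE returned by the simulator, and the bounds, starting point, and optimum location are illustrative only.

```python
import numpy as np
import jax
import jax.numpy as jnp
from scipy.optimize import minimize

jax.config.update("jax_enable_x64", True)  # match SciPy's double precision

def toy_pce(p):
    # Hypothetical smooth surrogate for the power-conversion efficiency.
    return 0.25 - jnp.sum((jnp.asarray(p) - 0.3) ** 2)

neg_pce = lambda p: -toy_pce(p)        # maximize PCE = minimize its negative
grad_neg_pce = jax.grad(neg_pce)       # exact gradient via AD

res = minimize(
    lambda p: float(neg_pce(p)),
    x0=np.array([0.8, 0.1]),
    jac=lambda p: np.asarray(grad_neg_pce(p)),
    bounds=[(0.0, 1.0)] * 2,           # box-constrained design parameters
    method="SLSQP",
)
print(res.x)  # close to the surrogate's optimum at (0.3, 0.3)
```

With an exact gradient, SLSQP needs only a handful of objective evaluations on this surrogate, in contrast to the thousands of solves required by random sampling.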
Traditionally, solar-cell optimization has been carried out with a variety of gradient-free black-box techniques, such as particle swarm optimization and genetic algorithms pvoptim. When the solar cell simulator is treated as a black box, without any additional information, optimization becomes a data-intensive task.
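The data intensity of black-box search can be sketched with a simple random-search baseline, in the spirit of the randomly sampled designs discussed in this section. `toy_pce` is an illustrative objective, and each evaluation stands in for a full set of PDE solves.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_pce(p):
    # Illustrative objective; each call stands in for a full set of PDE solves.
    return 0.25 - np.sum((p - 0.3) ** 2)

samples = rng.uniform(0.0, 1.0, size=(200, 2))  # 200 random box-constrained designs
best = max(samples, key=toy_pce)
print(toy_pce(best))  # best of 200 evaluations; many solves for modest accuracy
```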

In this section we demonstrate several benefits of having a differentiable simulator. The most obvious use case is gradient-based PCE optimization. To compute the power conversion efficiency (PCE) of a PV cell, the DD equations must be solved for multiple bias voltages, from zero up to the open-circuit voltage V o c at which the current through the cell reaches zero.
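The PCE computation described above can be sketched with an ideal-diode stand-in for the drift-diffusion solve at each bias point. The values of `J_sc`, `J_0`, `V_t`, and `P_in` below are illustrative, not simulator outputs.

```python
import numpy as np

J_sc, J_0, V_t = 400.0, 1e-9, 0.02585  # A/m^2, A/m^2, thermal voltage in V (illustrative)
P_in = 1000.0                           # incident power density, W/m^2

def current(v):
    # Ideal-diode I-V curve standing in for a drift-diffusion solve at bias v.
    return J_sc - J_0 * (np.exp(v / V_t) - 1.0)

v = np.linspace(0.0, 0.8, 2001)  # sweep bias from zero past V_oc
j = current(v)
power_producing = j > 0.0        # points below the open-circuit voltage
pce = np.max(v[power_producing] * j[power_producing]) / P_in
print(round(pce * 100, 2))       # power conversion efficiency in percent
```

In the real simulator, every sample of `current(v)` is itself a nonlinear PDE solve, which is why the number of solves per efficiency evaluation matters.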

For these problems, a preconditioner can dramatically speed up convergence; we use an ILU(0) preconditioner, as detailed in E.

The DD model itself is discretized using the Scharfetter–Gummel scheme, which is based on finite differences (for details see D). The DD model also contains exponentials of the potentials, which often cause convergence stagnation; to mitigate these issues, we employ a modified version of the Newton algorithm, as described in F. Equation 2 is solved with the preconditioned generalized minimal residual method (GMRES), which was recently included in JAX. Since this method is matrix-free, it allows us to exploit the sparsity of our problem by directly providing the linear operator representing the product of the Jacobian and an arbitrary vector.
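The matrix-free approach can be sketched in JAX as follows: the Jacobian-vector product of a residual is obtained by forward-mode AD and passed directly to GMRES, so the Jacobian is never materialized. The residual `f` here is a toy function, not the discretized DD system.

```python
import jax
import jax.numpy as jnp

def f(u):
    # Toy nonlinear residual; the real solver uses the discretized DD system.
    return u ** 3 + u - 1.0

u = jnp.full(50, 0.5)

# Linear operator v -> J_f(u) v via forward-mode AD; the Jacobian is never built.
jvp = lambda v: jax.jvp(f, (u,), (v,))[1]

# Solve J_f(u) delta = -f(u) with matrix-free GMRES.
delta, _ = jax.scipy.sparse.linalg.gmres(jvp, -f(u))
u_next = u + delta  # one Newton update
```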
The Newton method displays quadratic convergence only when the trial function is sufficiently close to the root newton, and is not numerically robust when used on its own in practice. Note that we use subscripts to refer to partial derivatives, except where it is clear from the context that a subscript is a vector index. The iteration of Equation 2 is stopped upon convergence, i.e.
∂PV, therefore, complements the existing offering of AD-based solvers and, relying on the composability of these tools, potentially enables end-to-end differentiability of more advanced multiphysics simulations in which the DD model is coupled with the full Maxwell equations.

$$f_u(u^{(i)})\, \Delta u^{(i)} = - f(u^{(i)}),$$
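The Newton update $f_u(u^{(i)})\,\Delta u^{(i)} = -f(u^{(i)})$ can be sketched on a toy scalar residual, with a dense solve standing in for the preconditioned GMRES step used in practice:

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # double precision for a tight tolerance

def f(u):
    # Illustrative residual with root u = sqrt(2); not the DD system.
    return jnp.array([u[0] ** 2 - 2.0])

u = jnp.array([1.0])
for _ in range(20):
    J = jax.jacfwd(f)(u)               # f_u(u^(i)) via AD
    du = jnp.linalg.solve(J, -f(u))    # Newton step: J du = -f(u)
    u = u + du
    if jnp.linalg.norm(f(u)) < 1e-10:  # stop the iteration upon convergence
        break
print(float(u[0]))  # converges to sqrt(2)
```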

In this work we present ∂PV, a 1D simulation tool for PV cells which solves the drift-diffusion equations using the JAX automatic differentiation (AD) package jax2018github. ∂PV is able to compute not only the efficiency of the solar cell but also its derivative with respect to any material property set by the user. By computing derivatives with AD, we avoid the numerical and scaling issues associated with the finite-difference technique, while adding complexity comparable to that of the original problem baydin2018automatic. Thus, this new computational tool enables extensive, efficient materials optimization for PV cells, and can be used in conjunction with standard optimization methods or machine learning algorithms. In passing, we note that, thanks to AD libraries such as JAX jax2018github, Autograd maclaurin2015autograd and Zygote rackauckas2020generalized, several AD-enhanced simulators have been released, including for molecular dynamics schoenholz2020jax eastman2017openmm, fluid dynamics kochkov2021machine, kinetics goodrich2021designing, optics oskooi2010meep and general-purpose solvers hu2019difftaichi.
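The advantage over finite differences can be illustrated on a toy scalar function `g`, a hypothetical stand-in for the efficiency as a function of one material parameter:

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # avoid float32 round-off

g = lambda x: jnp.exp(jnp.sin(x))  # hypothetical stand-in for PCE(parameter)
x0 = 1.2

exact = float(jnp.cos(x0) * jnp.exp(jnp.sin(x0)))  # analytic derivative
ad = float(jax.grad(g)(x0))                        # reverse-mode AD
fd = float((g(x0 + 1e-6) - g(x0 - 1e-6)) / 2e-6)   # central finite difference

print(abs(ad - exact), abs(fd - exact))  # AD error is orders of magnitude smaller
```

AD evaluates the exact chain rule at machine precision, whereas the finite-difference estimate trades truncation error against round-off; this gap widens further when many parameters require many perturbed solves.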
