Tag: Optimization
-
Krylov Matrix Exponential
The Matrix exponential Consider a vector-valued function $\vec{v}(t)$ whose total derivative with respect to its argument is given explicitly by a linear operator $A$ (which we'll represent as a matrix): $\frac{d\vec{v}(t)}{dt} = \vec{v}(t) A$. Suppose at some $t$ we know the function's value $\vec{v}(0)$. (N.B. We've chosen $t=0$ here without losing generality;…
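The formal solution of the equation in the teaser is $\vec{v}(t) = \vec{v}(0)\,e^{tA}$. As a minimal illustration (not taken from the post, which discusses Krylov methods), here is a truncated Taylor-series evaluation of the matrix exponential applied in the row-vector convention used above:

```python
import numpy as np

def expm_taylor(A, order=20):
    """Matrix exponential via a truncated Taylor series (illustrative only,
    not the Krylov method the post is about)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k       # next Taylor term: A^k / k!
        result = result + term
    return result

# A generates rotations: exp(tA) = [[cos t, sin t], [-sin t, cos t]]
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
v0 = np.array([1.0, 0.0])
t = np.pi / 2
v_t = v0 @ expm_taylor(t * A)     # row-vector convention: dv/dt = v A
```

For this rotation generator, $\vec{v}(\pi/2)$ lands on $(0, 1)$; in practice one would use `scipy.linalg.expm` or, for large sparse $A$, the Krylov approach the post covers.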
-
Which method should I use? – A guide to benchmarking
Which method should I use? – a tutorial on benchmarking Introduction The five methods to reverse a list in Python Measuring time Measuring scaling Conclusion Introduction Last week LinkedIn showed me the following post. The LinkedIn post left out the original source, which, after some reverse image search, I found to be the…
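A benchmark like the one this post describes can be sketched with the standard-library `timeit` module. The method names and list size below are illustrative assumptions, not taken from the post:

```python
import timeit

# Compare a few ways to reverse a list (illustrative subset).
setup = "lst = list(range(1000))"
candidates = {
    "slice": "rev = lst[::-1]",
    "reversed_builtin": "rev = list(reversed(lst))",
    "copy_then_reverse": "rev = lst.copy(); rev.reverse()",
}

# timeit.repeat runs each statement several times; taking the minimum
# reduces noise from other processes on the machine.
timings = {
    name: min(timeit.repeat(stmt, setup=setup, number=1000, repeat=3))
    for name, stmt in candidates.items()
}
```

Measuring scaling, as the post's outline suggests, would repeat this over a range of list sizes rather than a single fixed `1000`.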
-
What is autograd? – Automatic differentiation and optimization with PyTorch
Automatically calculating a gradient The gradient descent method Conclusion In recent years, there has been a marked increase in media and academic attention towards advancements in artificial intelligence, particularly concerning deep neural networks. The most pronounced surge in attention was observed following the release of ChatGPT last November. Although the usage of these…
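The post's outline mentions the gradient descent method; as a minimal hand-rolled sketch (the post itself presumably uses PyTorch's autograd to obtain the gradient automatically), the update rule $x \leftarrow x - \eta\,\nabla f(x)$ looks like this:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent with a hand-supplied gradient function.
    With autograd, `grad` would be computed automatically instead."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # step against the gradient
    return x

# f(x) = (x - 3)^2 has gradient 2*(x - 3) and its minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Each step shrinks the distance to the minimum by a constant factor here, so `x_min` converges to 3.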