Blog

A blog of Python-related topics and code.

Visualizing vibronic transitions in a diatomic molecule

The Morse oscillator code introduced in a previous blog post can be used to visualize the vibronic transitions in a diatomic molecule: create two Morse objects (one for each electronic state) and plot their potential energy curves and energy levels on the same Matplotlib Axes object.
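As a rough sketch of the idea (defining an explicit Morse potential function rather than reusing the post's Morse class, whose interface isn't reproduced here, and with arbitrary illustrative parameters and energy levels):

```python
import numpy as np
import matplotlib.pyplot as plt

def morse(r, D, a, r0, Te=0):
    """Morse potential: well depth D, width parameter a, equilibrium
    bond length r0 and electronic term energy (offset) Te."""
    return Te + D * (1 - np.exp(-a * (r - r0)))**2

r = np.linspace(0.8, 4, 1000)
Vg = morse(r, D=5, a=1.5, r0=1.3)          # ground electronic state
Ve = morse(r, D=4, a=1.2, r0=1.5, Te=7)    # excited electronic state

fig, ax = plt.subplots()
ax.plot(r, Vg, c='C0', label='ground state')
ax.plot(r, Ve, c='C1', label='excited state')
# A few illustrative vibrational levels, drawn between the classical
# turning points of each well.
for V, levels, c in ((Vg, (0.4, 1.2, 1.9), 'C0'), (Ve, (7.3, 8.0, 8.6), 'C1')):
    for E in levels:
        inside = V <= E
        ax.plot(r[inside], np.full(inside.sum(), E), c=c, lw=1)
# A vertical arrow at fixed r suggests a (Franck-Condon) vibronic transition.
ax.annotate('', xy=(1.3, 7.3), xytext=(1.3, 0.4),
            arrowprops={'arrowstyle': '->'})
ax.set_xlabel('internuclear distance, $r$')
ax.set_ylabel('energy')
ax.legend()
plt.show()
```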

Plotting nuclide half-lives

More than 3000 nuclides (atomic species characterised by the number of neutrons and protons in their nuclei) are known; most of them are radioactive with a half-life of less than an hour. About 250 of them are stable (not observed to decay using presently-available instruments). The IAEA has an interactive online browser of the nuclides.
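A Segrè-style chart of such data might be sketched as below; the filename nuclides.csv and its columns Z, N and halflife_s are hypothetical stand-ins for whatever export of the nuclide data is used:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one row per nuclide, with columns Z, N and
# halflife_s (half-life in seconds; stable nuclides omitted or
# handled separately).
df = pd.read_csv('nuclides.csv')

fig, ax = plt.subplots()
# Plot each nuclide at (N, Z), coloured by the log of its half-life.
sc = ax.scatter(df['N'], df['Z'], c=np.log10(df['halflife_s']),
                s=4, marker='s', cmap='viridis')
fig.colorbar(sc, label=r'$\log_{10}(t_{1/2}\,/\,\mathrm{s})$')
ax.set_xlabel('neutron number, $N$')
ax.set_ylabel('proton number, $Z$')
plt.show()
```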

A shallow neural network for simple nonlinear classification

Classification problems are a broad class of machine learning applications devoted to assigning input data to a predefined category based on its features. If the boundary between the categories has a linear relationship to the input data, a simple logistic regression algorithm may do a good job. For more complex groupings, such as in classifying the points in the diagram below, a neural network can often give good results.
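A minimal NumPy sketch of such a network (not the post's own code) is given below: one tanh hidden layer and a sigmoid output unit, trained by plain gradient descent on a toy "two concentric rings" dataset that no linear boundary can separate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: class 0 is an inner disc, class 1 an outer ring.
m = 400
r = np.concatenate((rng.uniform(0, 1, m // 2), rng.uniform(1.5, 2.5, m // 2)))
theta = rng.uniform(0, 2 * np.pi, m)
X = np.vstack((r * np.cos(theta), r * np.sin(theta)))   # shape (2, m)
Y = (r > 1).astype(float).reshape(1, m)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# One hidden layer of nh units.
nh = 8
W1 = rng.normal(scale=0.5, size=(nh, 2)); b1 = np.zeros((nh, 1))
W2 = rng.normal(scale=0.5, size=(1, nh)); b2 = np.zeros((1, 1))

alpha = 0.5     # learning rate
for it in range(5000):
    # Forward pass: tanh hidden layer, sigmoid output.
    Z1 = W1 @ X + b1; A1 = np.tanh(Z1)
    Z2 = W2 @ A1 + b2; A2 = sigmoid(Z2)
    # Backpropagation of the cross-entropy cost gradients.
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m; db2 = dZ2.mean(axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * (1 - A1**2)
    dW1 = dZ1 @ X.T / m; db1 = dZ1.mean(axis=1, keepdims=True)
    # Gradient-descent parameter update.
    W1 -= alpha * dW1; b1 -= alpha * db1
    W2 -= alpha * dW2; b2 -= alpha * db2

accuracy = ((A2 > 0.5) == Y).mean()
print(f'training accuracy: {accuracy:.3f}')
```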

Plotting the decision boundary of a logistic regression model

In the notation of this previous post, a logistic regression binary classification model takes an input feature vector, $\boldsymbol{x}$, and returns a probability, $\hat{y}$, that $\boldsymbol{x}$ belongs to a particular class: $\hat{y} = P(y=1|\boldsymbol{x})$. The model is trained on a set of provided example feature vectors, $\boldsymbol{x}^{(i)}$, and their classifications, $y^{(i)} = 0$ or $1$, by finding the set of parameters that minimize the difference between $\hat{y}^{(i)}$ and $y^{(i)}$ in some sense.
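Since $\sigma(0) = 1/2$, the decision boundary on which $\hat{y} = 0.5$ is the set of points where $z = \boldsymbol{w}^T\boldsymbol{x} + b = 0$: a straight line for two-dimensional feature vectors. The sketch below illustrates this using scikit-learn's LogisticRegression on synthetic blobs (not necessarily the post's own fitting code):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
# Two Gaussian blobs of labelled points.
X = np.vstack((rng.normal((-1, -1), 1, (100, 2)),
               rng.normal((1.5, 1.5), 1, (100, 2))))
y = np.concatenate((np.zeros(100), np.ones(100)))

clf = LogisticRegression().fit(X, y)
(w1, w2), b = clf.coef_[0], clf.intercept_[0]

fig, ax = plt.subplots()
ax.scatter(*X.T, c=y, cmap='coolwarm', s=12)
# The boundary w1*x1 + w2*x2 + b = 0, i.e. x2 = -(w1*x1 + b) / w2.
x1 = np.array(ax.get_xlim())
ax.plot(x1, -(w1 * x1 + b) / w2, 'k--')
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
plt.show()
```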

Logistic regression for image classification

Simple logistic regression is a statistical method that can be used for binary classification problems. In the context of image processing, this could mean identifying whether a given image belongs to a particular class ($y=1$) or not ($y=0$), e.g. "cat" or "not cat". A logistic regression algorithm takes as its input a feature vector $\boldsymbol{x}$ and outputs a probability, $\hat{y} = P(y=1|\boldsymbol{x})$, that the feature vector represents an object belonging to the class. For images, the feature vector might be just the values of the red, green and blue (RGB) channels for each pixel in the image: a one-dimensional array of $n_x = n_\mathrm{height} \times n_\mathrm{width} \times 3$ real numbers formed by flattening the three-dimensional array of pixel RGB values. A logistic regression model is so named because it calculates $\hat{y} = \sigma(z)$, where

$$ \sigma(z) = \frac{1}{1+\mathrm{e}^{-z}} $$

is the logistic function and

$$ z = \boldsymbol{w}^T\boldsymbol{x} + b, $$

for a set of parameters, $\boldsymbol{w}$ and $b$. Here, $\boldsymbol{w}$ is an $n_x$-dimensional vector (one component for each component of the feature vector) and $b$ is a constant "bias".
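As a concrete illustration of this forward calculation (using a random placeholder image and placeholder parameters $\boldsymbol{w}$ and $b$, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
height, width = 64, 64
image = rng.random((height, width, 3))   # pixel RGB values in [0, 1]

# Flatten the (height, width, 3) array to an n_x-dimensional feature vector.
x = image.reshape(-1)
nx = x.size                              # 64 * 64 * 3 = 12288

w = rng.normal(scale=0.01, size=nx)      # placeholder weights
b = 0.0                                  # placeholder bias

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# y_hat = sigma(w.x + b), the predicted probability that y = 1.
y_hat = sigmoid(w @ x + b)
print(f'P(y=1|x) = {y_hat:.3f}')
```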