Thesis (Diffusion Coefficients) Flashcards
(Theoretical, Machine Learning)
What is an activation function? Which ones did you consider in this study?
The activation function applies a (typically nonlinear) transformation to a neuron's weighted sum of inputs plus bias, producing the neuron's output. It can take many forms, but for this study the sigmoid and rectified linear unit (ReLU) functions were considered. Other sigmoidal and linear-unit variants, such as the Leaky ReLU and softmax, were considered but not prioritized.
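A minimal sketch (using NumPy, not part of the thesis code) of how the two candidate activation functions transform a neuron's weighted sum; the weights, bias, and input below are hypothetical:

    import numpy as np

    def sigmoid(z):
        # squashes the pre-activation into the range (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    def relu(z):
        # passes positive pre-activations through, zeroes out negative ones
        return np.maximum(0.0, z)

    # hypothetical neuron: weights w, bias b, input x
    w = np.array([0.5, -1.2, 0.3])
    b = 0.1
    x = np.array([1.0, 0.4, 2.0])

    z = np.dot(w, x) + b           # weighted sum of inputs plus bias
    print(sigmoid(z), relu(z))     # the two candidate activations of the same neuron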
(Theoretical, Machine Learning)
What is the cost function? How does it relate to the loss function?
The loss function measures the error between the true (test) data and the predicted data, usually as a squared error for each prediction. The cost function is the average of these losses over the whole data set, which gives the mean squared error (MSE). In summary, the loss function is evaluated per data point, while the cost function describes the overall model. The goal of training a neural network through backpropagation is to minimize the cost function.
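As an illustration (hypothetical numbers, not thesis data), the per-prediction losses and the averaged cost can be computed as follows:

    import numpy as np

    # hypothetical true vs. predicted values for a small data set
    y_true = np.array([2.1, 3.4, 0.8, 1.9])
    y_pred = np.array([2.0, 3.0, 1.0, 2.5])

    per_point_loss = (y_true - y_pred) ** 2   # loss: squared error of each individual prediction
    cost = per_point_loss.mean()              # cost: average of the losses over the data set (MSE)
    print(per_point_loss, cost)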
(Theoretical, Machine Learning)
What tools will be used to build the artificial neural network? Is it time-consuming or highly complex?
The group aims to use TensorFlow in Python 3 for its neural network. With a small data set of about 600 data points in total, it is faster to use a familiar tool than to build one from scratch. For neural networks, the time complexity is dominated by the matrix multiplications performed during training; for example, multiplying an i×j matrix by a j×k matrix has complexity O(ijk).
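A minimal sketch of such a network in TensorFlow/Keras; the layer sizes, input dimension, and optimizer below are illustrative assumptions, not the thesis settings:

    import tensorflow as tf

    # small feed-forward regression network (illustrative architecture)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(3,)),                      # e.g. three input factors
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(16, activation="sigmoid"),
        tf.keras.layers.Dense(1),                        # single output, e.g. a diffusion coefficient
    ])
    model.compile(optimizer="adam", loss="mse")          # cost function: mean squared error
    # model.fit(X_train, y_train, epochs=100)            # X_train, y_train are the experimental data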
(Theoretical, Experimental Design)
Why did you choose the full factorial method over other experimental designs?
This decision stems from the fact that the neural network requires a robust data set. Given the time and economic constraints, the full factorial experimental design offers the largest data set available for model training without resorting to a one-factor-at-a-time (OFAT) design.
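For illustration, the number of runs in a full factorial design is the product of the number of levels of each factor; the factors and levels below are hypothetical, not the thesis design:

    from itertools import product

    # hypothetical factors and levels
    factors = {
        "temperature_C": [25, 35, 45],
        "concentration_M": [0.01, 0.05, 0.10],
        "electrolyte": ["KCl", "NaCl"],
    }

    # full factorial design: every combination of every level of every factor
    runs = list(product(*factors.values()))
    print(len(runs))                      # 3 * 3 * 2 = 18 experimental runs
    print(dict(zip(factors, runs[0])))    # first run as a factor -> level mapping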
(Theoretical, Electrochemistry)
The expression for molar conductivity is given by:
Λm = κ / c
where Λm is the molar conductivity, κ is the specific conductivity, and c is the molar concentration.
Based on this, one might infer that the molar conductivity (Λm) is inversely proportional to the molar concentration (c), i.e., that decreasing the concentration increases the molar conductivity in 1/c proportion. In reality, this is NOT the case. Explain why.
At first glance there appears to be an inverse relationship between molar conductivity (Λm) and molar concentration (c), but note that the specific conductivity κ is itself dependent on concentration. As the molar concentration decreases, there are fewer charge-carrying ions per unit volume, so the specific conductivity also decreases. The two effects partially offset each other, so Λm does not scale as 1/c; it only increases gradually toward its limiting value at infinite dilution.
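As a worked illustration (standard electrolyte-conductance relations, not results from this thesis), for a strong electrolyte the molar conductivity rises only gradually toward its limiting value on dilution instead of growing as 1/c:

    \[
      \Lambda_m = \frac{\kappa}{c}, \qquad
      \Lambda_m \approx \Lambda_m^{\circ} - K\sqrt{c} \quad \text{(Kohlrausch's empirical relation)}
    \]

where Λ°m is the limiting molar conductivity at infinite dilution and K here is an empirical constant, not the specific conductivity.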
(Theoretical, Mass Transfer)
Explain the limitations of Fick's law and the failures of the classical diffusion equation.
(Practical, Machine Learning)
How do you minimize the cost function?
In machine learning applications, the cost function is minimized using gradient descent. At each iteration, the gradient of the cost with respect to the model parameters is computed, and the parameters are updated by stepping opposite the direction of steepest ascent (the negative gradient), so the cost decreases over successive iterations.
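A minimal gradient descent sketch on a one-parameter least-squares problem (illustrative only; the data and learning rate are hypothetical, but the update rule is the same one applied to every weight in the network):

    import numpy as np

    # hypothetical data, roughly y = 2x
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 8.1])

    w = 0.0      # parameter to learn
    lr = 0.01    # learning rate (step size)

    for _ in range(500):
        y_pred = w * x
        grad = np.mean(2 * (y_pred - y) * x)   # gradient of the MSE cost with respect to w
        w -= lr * grad                         # step opposite the gradient (steepest descent)

    print(w)   # converges toward ~2, the least-squares slope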