MLflow: A platform for streamlining Machine Learning Workflows

At BlablaConf, I had the privilege of giving a talk all about MLflow, a handy tool that helps machine learning projects run more smoothly. I broke down how it keeps track of the different experiments you run, how it bundles up your models neatly, and how it makes deploying them a breeze. I stressed how important it is for teams to work well together in data science, and MLflow makes that happen by keeping everything organized and reproducible....

Mon May 6, 2024 · 1 min · 123 words · Me

Radar human activity recognition with machine learning

During my PhD years, I built a demo of indoor human activity recognition using FMCW radar and machine learning. The schematic below shows the end-to-end system from the sensor node to the cloud. I was involved in the different stages of this work, from the sensor configuration and the signal processing to building a convolutional neural network for Doppler-map classification. This schematic shows the workflow of the demo from the sensor edge to the cloud....

Wed May 1, 2024 · 1 min · 206 words · Me

Regularization techniques

Regularization is a widely used technique because of its positive impact on a model’s performance. However, there are many methods of regularization, and I will go through some of them in this post. First, what is regularization? Regularization is a technique that helps a machine learning model avoid overfitting and enhances its generalization to unseen data. Regularization techniques 1. L1, L2 regularization L2 and L1 regularization put a constraint on the model’s weights and biases....
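As a quick illustration of the idea in this excerpt, here is a minimal sketch of how L1 and L2 penalties are added to a base loss. The function names (`l1_penalty`, `l2_penalty`) and the lambda values are my own illustrative choices, not from the post itself:

```python
import numpy as np

def l1_penalty(weights, lam=0.01):
    """L1 regularization: lambda times the sum of absolute weights."""
    return lam * np.sum(np.abs(weights))

def l2_penalty(weights, lam=0.01):
    """L2 regularization: lambda times the sum of squared weights."""
    return lam * np.sum(weights ** 2)

# The regularized loss is the base loss plus the chosen penalty.
weights = np.array([0.5, -1.0, 2.0])
base_loss = 0.25
loss_l2 = base_loss + l2_penalty(weights, lam=0.1)  # 0.25 + 0.1 * 5.25 = 0.775
loss_l1 = base_loss + l1_penalty(weights, lam=0.1)  # 0.25 + 0.1 * 3.5  = 0.6
```

Both penalties discourage large weights; L1 additionally tends to drive some weights to exactly zero, which L2 does not.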

Fri August 11, 2023 · 3 min · 455 words · Me

Visualizing optimization algorithms

1. Gradient Descent Gradient descent is an optimization algorithm that iteratively adjusts model parameters to minimize a function, typically a loss function. By moving in the direction of the steepest decrease, determined by the gradient, it finds the parameters that best fit the data. Equation: θ = θ - α * ∇J(θ) Description: θ: Parameters (weights) of the model being optimized. α (alpha): Learning rate; determines the size of the steps taken during optimization....
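The update rule θ = θ - α * ∇J(θ) from this excerpt can be sketched in a few lines. The function name and the example objective J(θ) = (θ - 3)² are my own illustrative choices:

```python
def gradient_descent(grad, theta, alpha=0.1, steps=100):
    """Iteratively apply theta = theta - alpha * grad(theta)."""
    for _ in range(steps):
        theta = theta - alpha * grad(theta)
    return theta

# Minimize J(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
theta_star = gradient_descent(lambda t: 2 * (t - 3), theta=0.0)
# theta_star converges toward the minimizer theta = 3.
```

Each step moves θ against the gradient; with a suitably small learning rate α, the iterates converge to the minimizer.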

Mon May 1, 2023 · 4 min · 784 words · Me