# Suggestion: Adding Loss functions of Machine Learning in maths folder #559

## Comments
Hi @GreatRSingh,

@Navaneeth-Sharma Can you put up a list of loss functions that you are going to implement and which you have already implemented?

Sorry @GreatRSingh,

@Navaneeth-Sharma OK, I will do that.

Hi, just a heads-up so that we don't implement the same losses: I will try to take up these loss functions:

@Navaneeth-Sharma No, none of them are already taken. I will take up MSE, NLL, MAE, MarginRanking, and KLDivergence.

@Navaneeth-Sharma @siriak I have added a list of loss functions in the description.
This issue has been automatically marked as abandoned because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Working on adding Hinge Loss.

This issue has been automatically marked as abandoned because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Are all these functions taken?

No, you can start working on them.
Opened a PR to add KL divergence loss as a subtask of this issue: #656

I have opened a PR that implements the Huber loss function mentioned in this issue: #697 🚀

Are there any functions that can still be worked on?

I think NLL and Marginal Ranking from the list are still not implemented.

Finished implementing both, but I have a question regarding the PR: should I open two separate PRs or one PR containing both algorithms?

Two separate PRs, please.

When opening a PR, please make sure that your PR is on a separate branch like
Adding loss functions to a Rust-based repository focused on algorithms involves a deliberate approach to ensure correctness, usability, and maintainability. Here is a structured plan to implement the requested loss functions.

**Title:** Adding Loss Functions for Machine Learning
**Repository:** TheAlgorithms / Rust
**Suggested by:** GreatRSingh

### Description

Loss functions are crucial for assessing the performance of machine learning models. They estimate how well an algorithm models the provided data and guide the optimization process. This suggestion involves adding various loss functions to the repository under a dedicated folder.

### Task List
### Technical Approach

#### 1. Folder Structure

Create the following folder structure:
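A plausible layout, assuming one file per loss function plus a module root (the file names mirror the function names below and are illustrative, not taken verbatim from the original comment):

```
machine_learning/
└── loss_functions/
    ├── mod.rs
    ├── cross_entropy_loss.rs
    ├── hinge_loss.rs
    ├── mean_squared_error.rs
    ├── huber_loss.rs
    ├── negative_log_likelihood.rs
    ├── mean_absolute_error.rs
    ├── marginal_ranking_loss.rs
    └── kl_divergence.rs
```

A minimal `mod.rs` wiring these files together, sketched after the common Rust pattern of re-exporting submodule functions at the module root:

```rust
// Declare one submodule per loss function file.
pub mod cross_entropy_loss;
pub mod hinge_loss;
pub mod huber_loss;
pub mod kl_divergence;
pub mod marginal_ranking_loss;
pub mod mean_absolute_error;
pub mod mean_squared_error;
pub mod negative_log_likelihood;

// Re-export the functions for convenient imports.
pub use cross_entropy_loss::cross_entropy_loss;
pub use hinge_loss::hinge_loss;
pub use huber_loss::huber_loss;
pub use kl_divergence::kl_divergence;
pub use marginal_ranking_loss::marginal_ranking_loss;
pub use mean_absolute_error::mean_absolute_error;
pub use mean_squared_error::mean_squared_error;
pub use negative_log_likelihood::negative_log_likelihood;
```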
#### 2. Implement Loss Functions

Provide a base Rust file for each loss function with the implementation. Below are outlines and examples.

##### 2.1 Cross-Entropy Loss

File: `cross_entropy_loss.rs`

```rust
pub fn cross_entropy_loss(predictions: &[f64], targets: &[f64]) -> f64 {
    predictions
        .iter()
        .zip(targets.iter())
        .map(|(&p, &t)| -t * p.ln())
        .sum()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_cross_entropy_loss() {
        let predictions = vec![0.9, 0.1];
        let targets = vec![1.0, 0.0];
        let loss = cross_entropy_loss(&predictions, &targets);
        // -1.0 * ln(0.9) - 0.0 * ln(0.1) ≈ 0.10536
        assert!((loss - 0.1053).abs() < 0.0001);
    }
}
```
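One caveat not raised in the original comment: `p.ln()` is `-inf` at `p == 0.0`, and `0.0 * f64::NEG_INFINITY` is NaN under IEEE 754, so a single zero prediction poisons the whole sum even when its target weight is zero. A minimal sketch of a safer variant (the function name and epsilon are illustrative assumptions, not part of the proposal):

```rust
/// Cross-entropy with predictions clamped away from zero.
/// Illustrative sketch only; the epsilon is an arbitrary small constant.
pub fn cross_entropy_loss_stable(predictions: &[f64], targets: &[f64]) -> f64 {
    const EPS: f64 = 1e-12;
    predictions
        .iter()
        .zip(targets.iter())
        .map(|(&p, &t)| -t * p.max(EPS).ln())
        .sum()
}
```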
##### 2.2 Hinge Loss

File: `hinge_loss.rs`

```rust
pub fn hinge_loss(predictions: &[f64], targets: &[f64]) -> f64 {
    predictions
        .iter()
        .zip(targets.iter())
        .map(|(&p, &t)| (1.0 - t * p).max(0.0))
        .sum()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_hinge_loss() {
        let predictions = vec![0.6, -0.4];
        let targets = vec![1.0, -1.0];
        let loss = hinge_loss(&predictions, &targets);
        // max(0, 1 - (1.0)(0.6)) + max(0, 1 - (-1.0)(-0.4)) = 0.4 + 0.6 = 1.0
        assert!((loss - 1.0).abs() < 0.0001);
    }
}
```
##### 2.3 Mean Squared Error (MSE)

File: `mean_squared_error.rs`

```rust
pub fn mean_squared_error(predictions: &[f64], targets: &[f64]) -> f64 {
    predictions
        .iter()
        .zip(targets.iter())
        .map(|(&p, &t)| (p - t).powi(2))
        .sum::<f64>()
        / predictions.len() as f64
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_mean_squared_error() {
        let predictions = vec![3.0, -0.5, 2.0, 7.0];
        let targets = vec![2.5, 0.0, 2.0, 8.0];
        let loss = mean_squared_error(&predictions, &targets);
        // (0.25 + 0.25 + 0.0 + 1.0) / 4 = 0.375
        assert!((loss - 0.375).abs() < 0.0001);
    }
}
```
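A shared caveat for the averaged losses in this plan (MSE, Huber, NLL, MAE, margin ranking), again not raised in the original comment: the slices are silently assumed to have equal, non-zero length, since `zip` stops at the shorter slice and an empty input yields NaN via `0.0 / 0.0`. An illustrative guard that could be placed at the top of each such function:

```rust
// Illustrative guard, not part of the original proposal.
assert_eq!(
    predictions.len(),
    targets.len(),
    "predictions and targets must have the same length"
);
assert!(!predictions.is_empty(), "cannot average over an empty slice");
```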
##### 2.4 Huber Loss

File: `huber_loss.rs`

```rust
pub fn huber_loss(predictions: &[f64], targets: &[f64], delta: f64) -> f64 {
    predictions
        .iter()
        .zip(targets.iter())
        .map(|(&p, &t)| {
            let diff = (p - t).abs();
            if diff <= delta {
                // Quadratic region for small residuals.
                0.5 * diff.powi(2)
            } else {
                // Linear region for large residuals.
                delta * (diff - 0.5 * delta)
            }
        })
        .sum::<f64>()
        / predictions.len() as f64
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_huber_loss() {
        let predictions = vec![3.0, -0.5, 2.0, 7.0];
        let targets = vec![2.5, 0.0, 2.0, 8.0];
        let loss = huber_loss(&predictions, &targets, 1.0);
        // All residuals are within delta = 1.0, so each term is 0.5 * diff^2:
        // (0.125 + 0.125 + 0.0 + 0.5) / 4 = 0.1875
        assert!((loss - 0.1875).abs() < 0.0001);
    }
}
```
##### 2.5 Negative Log-Likelihood (NLL)

File: `negative_log_likelihood.rs`

```rust
pub fn negative_log_likelihood(predictions: &[f64], targets: &[f64]) -> f64 {
    predictions
        .iter()
        .zip(targets.iter())
        .map(|(&p, &t)| -t * p.ln())
        .sum::<f64>()
        / predictions.len() as f64
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_negative_log_likelihood() {
        let predictions = vec![0.9, 0.1];
        let targets = vec![1.0, 0.0];
        let loss = negative_log_likelihood(&predictions, &targets);
        // Unlike cross_entropy_loss above, this averages over the slice:
        // (-1.0 * ln(0.9)) / 2 ≈ 0.05268
        assert!((loss - 0.0527).abs() < 0.0001);
    }
}
```
##### 2.6 Mean Absolute Error (MAE)

File: `mean_absolute_error.rs`

```rust
pub fn mean_absolute_error(predictions: &[f64], targets: &[f64]) -> f64 {
    predictions
        .iter()
        .zip(targets.iter())
        .map(|(&p, &t)| (p - t).abs())
        .sum::<f64>()
        / predictions.len() as f64
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_mean_absolute_error() {
        let predictions = vec![3.0, -0.5, 2.0, 7.0];
        let targets = vec![2.5, 0.0, 2.0, 8.0];
        let loss = mean_absolute_error(&predictions, &targets);
        // (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
        assert!((loss - 0.5).abs() < 0.0001);
    }
}
```
##### 2.7 Marginal Ranking Loss

File: `marginal_ranking_loss.rs`

```rust
pub fn marginal_ranking_loss(output1: &[f64], output2: &[f64], target: &[f64], margin: f64) -> f64 {
    output1
        .iter()
        .zip(output2.iter())
        .zip(target.iter())
        // Standard margin ranking loss: max(0, margin - t * (o1 - o2)).
        .map(|((&o1, &o2), &t)| (margin - t * (o1 - o2)).max(0.0))
        .sum::<f64>()
        / output1.len() as f64
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_marginal_ranking_loss() {
        let output1 = vec![0.2, 0.3, 0.4];
        let output2 = vec![0.1, 0.2, 0.3];
        let target = vec![1.0, 0.0, -1.0];
        let loss = marginal_ranking_loss(&output1, &output2, &target, 1.0);
        // (max(0, 1 - 0.1) + max(0, 1 - 0) + max(0, 1 + 0.1)) / 3
        //   = (0.9 + 1.0 + 1.1) / 3 = 1.0
        assert!((loss - 1.0).abs() < 0.0001);
    }
}
```
##### 2.8 KL Divergence

File: `kl_divergence.rs`

```rust
pub fn kl_divergence(p: &[f64], q: &[f64]) -> f64 {
    p.iter()
        .zip(q.iter())
        .map(|(&p_i, &q_i)| p_i * (p_i / q_i).ln())
        .sum()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_kl_divergence() {
        let p = vec![0.4, 0.6];
        let q = vec![0.5, 0.5];
        let divergence = kl_divergence(&p, &q);
        // Natural-log KL: 0.4 * ln(0.8) + 0.6 * ln(1.2) ≈ 0.02014
        assert!((divergence - 0.0201).abs() < 0.0001);
    }
}
```
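To round out section 2, a quick usage sketch showing how these free functions compose (the values are illustrative, and the sketch assumes the functions above are in scope, e.g. re-exported from the `loss_functions` module):

```rust
fn main() {
    let predictions = vec![3.0, -0.5, 2.0, 7.0];
    let targets = vec![2.5, 0.0, 2.0, 8.0];

    // Compare how the averaged losses weigh the same residuals.
    println!("MSE   = {:.4}", mean_squared_error(&predictions, &targets)); // 0.3750
    println!("MAE   = {:.4}", mean_absolute_error(&predictions, &targets)); // 0.5000
    println!("Huber = {:.4}", huber_loss(&predictions, &targets, 1.0)); // 0.1875
}
```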
### Documentation

Update the project documentation to include information about the newly added loss functions. Provide an overview of each loss function, its usage, and examples.
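As one possible shape for that documentation, a rustdoc sketch for one of the functions (the wording is illustrative, not from the original comment):

```rust
/// Computes the mean squared error (MSE) between `predictions` and `targets`:
/// the mean of the squared differences between paired elements.
///
/// Assumes both slices have the same non-zero length.
pub fn mean_squared_error(predictions: &[f64], targets: &[f64]) -> f64 {
    predictions
        .iter()
        .zip(targets.iter())
        .map(|(&p, &t)| (p - t).powi(2))
        .sum::<f64>()
        / predictions.len() as f64
}
```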
### Contribution Guidelines

### Review and Merge
### Disclaimer

This solution is a collaborative effort to enhance the TheAlgorithms / Rust repository by adding valuable machine learning loss functions. May the knowledge and effort benefit many, and may the contributions be recognized and appreciated. Thank you for the opportunity to contribute! May Allah bless our endeavors with success.
I would like to suggest adding loss functions to this repo.
The loss function estimates how well a particular algorithm models the provided data.
Add loss functions to:
`machine_learning/loss_functions`
Task List: