Wednesday, May 26, 2021

Beyond evaluation: Improving fairness with Model Remediation | Demo


Fairness evaluation is a crucial step in avoiding bias: it reveals how a model performs across a variety of users. When we identify that our model underperforms on slices of our data, we need a mitigation strategy so that we don't create or reinforce unfair bias. This demo shows how the Model Remediation Library can be used to achieve that goal, with an emphasis on best practices. MinDiff is the first technique in what will ultimately be a larger Model Remediation Library.

Resources:
Model Remediation Case Study → https://goo.gle/32WZr0T
Model Remediation Documentation → https://goo.gle/3eBnSqc
Model Remediation GitHub → https://goo.gle/3gM471R

Speaker: Sean O'Keefe

Watch more:
TensorFlow at Google I/O 2021 Playlist → https://goo.gle/io21-TensorFlow-1
All Google I/O 2021 Demos → https://goo.gle/io21-alldemos
All Google I/O 2021 Sessions → https://goo.gle/io21-allsessions

Subscribe to TensorFlow → https://goo.gle/TensorFlow

#GoogleIO #ML/AI
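To give a flavor of the idea behind MinDiff: it adds a training penalty when the model's score distributions differ between a sensitive and a non-sensitive slice of the data, commonly measured with a maximum mean discrepancy (MMD). The following is a minimal NumPy sketch of such an MMD-style penalty, not the library's actual implementation; the function names, the Gaussian kernel choice, and the bandwidth value are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=0.1):
    # Pairwise Gaussian kernel between two 1-D arrays of model scores.
    diff = a[:, None] - b[None, :]
    return np.exp(-(diff ** 2) / (2 * bandwidth ** 2))

def mmd_penalty(scores_sensitive, scores_nonsensitive, bandwidth=0.1):
    # Squared MMD: small when the two score distributions match,
    # large when they diverge. Adding this to the training loss
    # pushes the model toward similar score distributions per slice.
    k_ss = gaussian_kernel(scores_sensitive, scores_sensitive, bandwidth).mean()
    k_nn = gaussian_kernel(scores_nonsensitive, scores_nonsensitive, bandwidth).mean()
    k_sn = gaussian_kernel(scores_sensitive, scores_nonsensitive, bandwidth).mean()
    return k_ss + k_nn - 2 * k_sn

# Identical score distributions incur (near) zero penalty;
# a shifted distribution incurs a positive penalty.
same = np.array([0.2, 0.5, 0.8])
shifted = np.array([0.6, 0.9, 1.2])
print(mmd_penalty(same, same))     # ~0.0
print(mmd_penalty(same, shifted))  # > 0
```

In the actual library, a wrapper model combines the original task loss with this kind of penalty weighted by a tunable coefficient; see the Model Remediation documentation linked above for the real API.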
