Content provided by David Yakobovitch. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by David Yakobovitch or his podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.

How to Repair Trust and Enable Ethics by Design for Machine Learning with Ben Byford

46:48
 


[Audio]

Podcast: Play in new window | Download

Subscribe: Google Podcasts | Spotify | Stitcher | TuneIn | RSS

Ben Byford has been a freelance web designer since 2009, and he is now mostly a freelance AI/ML teacher, speaker, ethicist, and tinkerer – in his spare time he makes computer games. Ben has worked on large-scale projects as a web designer with companies such as Virgin.com, on medium-scale projects with clients including the BFI, CEH, Virgin, and Virgin Unite, and has created a myriad of sites for smaller businesses, startups, and creatives' portfolios.

He’s mostly been a design and front-end guy, with extensive knowledge of other technologies and development languages, and has previously worked as a mediator between dev teams and clients. His public speaking and lecturing blends his insights on AI and ethics, web technologies, and entrepreneurship, focusing on the use of technology as a tool for innovation and creativity.

He hosts the Machine Ethics Podcast, a series of interviews with academics, writers, technologists, and business people on the theme of AI and autonomy, and he also gives talks on machine ethics.

Episode Links:

Ben Byford’s LinkedIn: https://www.linkedin.com/in/ben-byford/

Ben Byford’s Twitter: @benbyford

Ben Byford’s Website: https://www.benbyford.com/

Podcast Details:

Podcast website: https://www.humainpodcast.com

Apple Podcasts: https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009

Spotify: https://open.spotify.com/show/6tXysq5TzHXvttWtJhmRpS

RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9

YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag

YouTube Clips: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag/videos

Support and Social Media:

– Check out the sponsors above; it’s the best way to support this podcast

– Support on Patreon: https://www.patreon.com/humain/creators

– Twitter: https://twitter.com/dyakobovitch

– Instagram: https://www.instagram.com/humainpodcast/

– LinkedIn: https://www.linkedin.com/in/davidyakobovitch/

– Facebook: https://www.facebook.com/HumainPodcast/

– HumAIn Website Articles: https://www.humainpodcast.com/blog/

Outline:

Here are the timestamps for the episode:

(00:00) – Introduction

(01:24) – The big question, and the moral quandary we're battling with, is how much information we give to organizations and governments about our movements. That's always been the case, but in the face of a pandemic we now have to think differently about how much we can give away and what kinds of things can be done with that data.

(03:31) – You're really concerned with whom you're giving that data to, and whether they can be transparent about how they're using it, keep it secure, and delete it when appropriate. It's very hard to actually believe or have trust in organizations when they say these things. It's a good thing to be doing, but the trust issue is a big one.

(06:51) – Whether Americans have similar legislation put in place is, in my opinion, irrelevant, because the internet is cross-boundary, cross-continental. If you deal with anyone outside your own jurisdiction, your own country, then you fall under someone else's legislation. And it just so happens that GDPR is one of the most robust pieces of data legislation we have at the moment.

(09:19) – We should be teaching people to reflect on the situation within our educational institutions, so that we are priming the people who are going to be making this stuff in the future to make design decisions and technical decisions they can implement with full respect for other people and for the environment.

(12:20) – We should all be worried about our security and our data privacy as citizens, because we don't necessarily want to tell everyone what we're talking about, and that comes into the discrimination issue. You can be discriminated against in different countries for all sorts of different things, and you might not want to tell your neighbor or your government certain things about your person, because those things aren't deemed normal, acceptable, or legal in that country.

(13:22) – There are many reasons why you would want to keep your privacy and your security intact. You're using a utility, and the utility doesn't respect the user. We say water and electricity are a general need, a civil need; I think the internet is certainly up there as a civil need.

(17:40) – As you're building technology, you have to require consent under GDPR. You have to stipulate usage under GDPR, and you have to give terms of access under GDPR. Users must be able to have their data amended or deleted, and to be shown what specific data is held on them. All of that has to be implemented, and if you don't implement it, you could be taken to court and sued for a lot of money. Some of that stuff is now illegal, but within the ethics of AI, the ethics of technology, and the ethics of mass automation, we have to go beyond what GDPR requires, beyond what is legal or illegal, and think about what kind of world we're making. What is equitable to most people? What is useful to people, and not just useful to shareholders?
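
To make those obligations concrete, here is a minimal sketch of what consent, access, amendment, and erasure can look like in code. It is an illustration only, not code from the episode: the class, storage, and method names are hypothetical, and a real system would need far more than this to be compliant.

```python
# Hypothetical sketch of GDPR data-subject rights: consent tied to a
# stated purpose, access, rectification, and erasure. Illustrative only.
import json
from datetime import datetime, timezone


class UserDataStore:
    def __init__(self):
        self._records: dict[str, dict] = {}   # user_id -> personal data
        self._consents: dict[str, dict] = {}  # user_id -> consent log

    def record_consent(self, user_id: str, purpose: str) -> None:
        """Consent must be explicit and tied to a stipulated purpose."""
        self._consents.setdefault(user_id, {})[purpose] = {
            "granted_at": datetime.now(timezone.utc).isoformat(),
        }

    def export_data(self, user_id: str) -> str:
        """Right of access: show the user exactly what is held on them."""
        return json.dumps({
            "data": self._records.get(user_id, {}),
            "consents": self._consents.get(user_id, {}),
        }, indent=2)

    def amend_data(self, user_id: str, field: str, value) -> None:
        """Right to rectification: let the user correct their data."""
        self._records.setdefault(user_id, {})[field] = value

    def delete_data(self, user_id: str) -> None:
        """Right to erasure: remove the data, and the consent log with it."""
        self._records.pop(user_id, None)
        self._consents.pop(user_id, None)
```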

(22:34) – The face-tracking stuff is great. It's a microcosm of what is essentially a really big ethical quandary, which has positive and negative effects, so it is really interesting and really frightening in the same way. You have to create trust. If it is known that these machines are very good and work very well, and the information doesn't really leave the robot in any meaningful way, or is anonymized in all respects, and isn't actually restricting citizens' mobility, then we've built something that really does work, and works well enough – and knowing when it works well enough is an ethical question. And then also, allowing humans to be in the loop somewhere.
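
As a rough illustration of that "the information doesn't leave the robot" idea, here is a minimal sketch that blurs detected faces in a frame before anything is stored or transmitted, using OpenCV's bundled face detector. The pipeline is an assumption for illustration, not something built or discussed in the episode.

```python
# Hypothetical on-device anonymization: blur every detected face before
# a frame ever leaves the device. Uses OpenCV's bundled Haar detector.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def anonymize_frame(frame):
    """Return a copy of the frame with every detected face blurred."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = frame.copy()
    for (x, y, w, h) in faces:
        # Blur only the face region; kernel size must be odd.
        out[y:y + h, x:x + w] = cv2.GaussianBlur(
            out[y:y + h, x:x + w], (51, 51), 0
        )
    return out  # only this anonymized frame should leave the device
```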

(26:53) – The obvious contradiction here is that the Chinese system seems to be very heavy-handed in its use of technology to implement those social norms. We don't really have a similar approach in the West, I don't think.

(34:02) – You have all these really good, really interesting applications, and then you have applications that restrict people's rights or human rights. And again, it might be that we have to look at what human rights actually mean in the digital world.

(38:04) – We want to live in a world where George Floyd, or anyone who is traditionally discriminated against in a society, can walk up to a police officer, can walk up to a person of power in that society, and know that they are going to be trustworthy. Whatever your trust in any situation, you don't want to be in a situation where you are in grave danger and you can't trust your own environment.

(39:11) – There has been a wealth of interest in ethics in technology, and in data science, machine learning, and AI there has just been an explosion. I'm seeing it in the emergence of quite a few workshops, talks, and conversations around AI, responsibility, transparency, diversity, equity, and all those sorts of terms. Into the future, I am most interested in how the interaction of moral agencies appears in the technologies we actually use, and in society's reaction to it.

(44:19) – Be mindful. We all have our autonomy, and we should all be thinking about the things we are doing; you should feel empowered to think about what you are doing. It's easy for me to say on this podcast, but please be mindful of how you affect the world.

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
