Posts in Biased decisions
Kriti Sharma: making artificial intelligence more ethical - Business au Feminin

Vice President of Bots and Artificial Intelligence at Sage, Kriti Sharma is a pioneer in the development of intelligent machines capable of working and reacting like human beings in order to simplify companies' administrative tasks. She is also the creator of Pegg, the world's first accounting chatbot, which will launch in France in 2018 and has already been adopted in 135 countries.

Artificial intelligence is one of the greatest revolutions of our time, and it could threaten human agency and human jobs. What is your view?

Kriti Sharma: Artificial intelligence is like any other major technological revolution: it will have positive as well as negative implications. What matters now is making sure it is used for good ends. For small businesses that lack large technical teams, for example, artificial intelligence can help automate a number of processes.

Technology is also attracting an increasingly diverse workforce, which was not the case before. And artificial intelligence can automate itself: creating software used to take time, but AI is now beginning to write its own code, so it can, to some extent, automate the work of the software engineer. That means we now need people with creative skills, no longer just engineers but a combination of Art and Science profiles. In other words, you don't need to be an engineer or a data scientist with a master's degree to work in artificial intelligence.

In "The Future of the Professions", Richard and Daniel Susskind discuss professions, such as lawyers, that will be affected by automation and artificial intelligence. Don't you think this will increase inequality on a global scale?

Read More
Artificial intelligence could hardwire sexism into our future. Unless we stop it- WEF Blog

In five years’ time, we might travel to the office in driverless cars, let our fridges order groceries for us and have robots in the classroom. Yet, according to the World Economic Forum’s Global Gender Gap Report 2017, it will take another 100 years before women and men achieve equality in health, education, economics and politics.

What’s more, it's getting worse for economic parity: it will take a staggering 217 years to close the gender gap in the workplace.

How can it be that the world is making great leaps forward in so many areas, especially technology, yet it's falling backwards when it comes to gender equality?

Read More
Microsoft Researcher Details The Real-World Dangers Of Algorithm Bias

However quickly artificial intelligence evolves, however steadfastly it becomes embedded in our lives -- in health, law enforcement, sex, etc. -- it can't outpace the biases of its creators, humans. Microsoft Researcher Kate Crawford delivered an incredible keynote speech, titled "The Trouble with Bias", at Spain's Neural Information Processing Systems (NIPS) conference on Tuesday.

Read More
Working for the algorithm: machines will help employers overcome bias - The Economist

Who is best placed to judge a firm’s workers? In 2018 employees everywhere will increasingly feel the effects of the rise of “talent analytics”, also known as “people analytics”, as they go about their daily work. Having been relatively slow compared with other corporate departments in making use of big data, in 2018 human-resources (HR) folk will become its most enthusiastic proponents—with significant implications for who gets hired, what they are paid and whether they are promoted. Employees will have to get used to being (often unwitting) guinea pigs in frequent HR experiments. And wise ones will think ever more carefully about how they express themselves in e-mails and on digital collaborative-working platforms such as Slack.

One reason is the pressure HR executives will face to make workplaces better for women and minority groups. The limitations of established approaches, such as training and awareness programmes, had caused “diversity fatigue” to set in. But it has become a corporate priority again after shocking headlines in 2017 about sexual discrimination and harassment in Silicon Valley, Hollywood, professional sports and big media firms, which reminded the world that bad corporate culture is a serious business risk. 

Read More

There’s no lack of reports on the ethics of artificial intelligence. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results.

The report, released two weeks ago, is the brainchild of Kate Crawford and Meredith Whittaker, cofounders of AI Now, a new research institute based out of New York University. Crawford, Whittaker, and their collaborators lay out a research agenda and a policy roadmap in a dense but approachable 35 pages. Their conclusion doesn’t waffle: Our efforts to hold AI to ethical standards to date, they say, have been a flop.

“New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI,” they write. When tech giants build AI products, too often “user consent, privacy and transparency are overlooked in favor of frictionless functionality that supports profit-driven business models based on aggregated data profiles…” Meanwhile, AI systems are being introduced in policing, education, healthcare, and other environments where the misfiring of an algorithm could ruin a life. Is there anything we can do? Crawford sat down with us this week for a discussion of why ethics in AI is still a mess, and what practical steps might change the picture.

Read More
A Study Used Sensors to Show That Men and Women Are Treated Differently at Work - HBR

Gender equality remains frustratingly elusive. Women are underrepresented in the C-suite, receive lower salaries, and are less likely than men to receive a critical first promotion to manager. Numerous causes have been suggested, but one argument that persists points to differences in men's and women's behavior.

Which raises the question: Do women and men act all that differently? We realized that there’s little to no concrete data on women’s behavior in the office. Previous work has relied on surveys and self-reported assessments — methods of data collecting that are prone to bias. Fortunately, the proliferation of digital communication data and the advancement of sensor technology have enabled us to more precisely measure workplace behavior.

We decided to investigate whether gender differences in behavior drive gender differences in outcomes at one of our client organizations, a large multinational firm, where women were underrepresented in upper management. In this company, women made up roughly 35%–40% of the entry-level workforce but a smaller percentage at each subsequent level. Women made up only 20% of people at the two highest seniority levels at this organization.
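The funnel the study describes, roughly 35%–40% of women at entry level shrinking to 20% at the top, is what you get when even a modest per-level gap compounds. A minimal sketch, using hypothetical promotion rates that are illustrative and not taken from the study:

```python
# Illustrative only: a small, constant per-level gap in promotion rates
# compounds into sharp underrepresentation at senior levels.
# The 0.85 relative rate and six levels are assumptions, not study data.

entry_share_women = 0.375        # midpoint of the 35%-40% entry-level figure
relative_promotion_rate = 0.85   # assume women are promoted at 85% the rate of men

share = entry_share_women
for level in range(1, 7):        # six promotion steps up the hierarchy
    promoted_women = share * relative_promotion_rate
    promoted_men = 1 - share     # men promoted at the baseline rate of 1.0
    share = promoted_women / (promoted_women + promoted_men)
    print(f"Level {level}: women make up {share:.1%}")
```

Under these assumed numbers, representation drifts from 37.5% at entry down to roughly 18%–21% at the top two levels, close to the 20% the study reports, without any single level looking dramatically unfair on its own.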

Read More
Your Data Are Probably Biased And That's Becoming A Massive Problem: beware of black boxes - INC

Nobody sets out to be biased, but bias is harder to avoid than you would think. Wikipedia lists over 100 documented biases, from authority bias and confirmation bias to the Semmelweis effect: we have an enormous tendency to let things other than the facts affect our judgments. All of us, as much as we hate to admit it, are vulnerable.

Machines, even virtual ones, have biases too. They are designed, necessarily, to favor some kinds of data over others. Unfortunately, we rarely question the judgments of mathematical models and, in many cases, their biases can pervade and distort operational reality, creating unintended consequences that are hard to undo.

Yet the biggest problem with data bias is that we are mostly unaware of it, because we assume that data and analytics are objective. That's almost never the case. Our machines are, for better or worse, extensions of ourselves and inherit our subjective judgments. As data and analytics increasingly become a core component of our decision making, we need to be far more careful.

Read More
Artificial Intelligence—With Very Real Biases-WSJ

According to AI Now co-founder Kate Crawford, digital brains can be just as error-prone and biased as ours.

What do you imagine when someone mentions artificial intelligence? Perhaps it’s something drawn from science-fiction films: Hal’s glowing eye, a shape-shifting terminator or the sound of Samantha’s all-knowing voice in the movie “Her.”

As someone who researches the social implications of AI, I tend to think of something far more banal: a municipal water system, part of the substrate of our everyday lives. We expect these systems to work—to quench our thirst, water our plants and bathe our children. And we assume that the water flowing into our homes and offices is safe. Only when disaster strikes—as it did in Flint, Mich.—do we realize the critical importance of safe and reliable infrastructure.

Read More
Taking control of your unconscious bias? - Guardian/HSBC

With attention now a scarce resource, we increasingly rely on algorithms to help us navigate the world. Only now are we beginning to experience the side-effects of these filter bubbles as our ability to see and understand the bigger picture is eroding.

Part 1: Six key unconscious biases when making decisions

Dr Norma Montague cites five key unconscious biases to be aware of when making decisions. We’ve added a sixth for good measure.

Full article:

Read More