Microsoft quietly released a little feature, and suddenly it caused outrage

Should technology hide who you really are?

Not so long ago, I watched two people I really like having a blazing argument.

The subject? Being woke.

This recently ubiquitous term has stirred so many emotions. When that happens, there’s no longer a real definition of what it means.

Even if there ever was.

Which was why I was moved by the sudden debate around a little-known Microsoft feature that’s actually been around for almost two years.

It’s a feature within the newest versions of Word in Microsoft 365. Within the Editor tool, this one underlines words and phrases deemed to be, well, “problematic.”

So many things in people’s lives are problematic. It would be wonderful if technology could solve more of them. But this feature seems a touch odd.

Enter the robust male view.

Given that it’s been around since March 2020, you’d think that someone on one or other side of the cultural divide would have already offered a loud “Hurrah, Americah!” or snorted “Oh, God, not this” before now.

But it took the Daily Mail, of all publications, to emit a little stink in Microsoft’s direction. The paper calmly explained how Redmond’s “woke filter was capturing words like mankind, blacklist, whitewash, mistress and even maid.”

The Mail even scoffed that former British Prime Minister Margaret Thatcher couldn’t be referred to as “Mrs. Thatcher.” No, she is now “Ms. Thatcher.” I’m not sure she’d have liked that.

And then there’s “dancer” not being inclusive, while “performance artist” is.

Naturally — or, some might say, thankfully — this problematic-solver is an opt-in selection. It doesn’t autocorrect. It merely whispers gently that you may be sounding like someone not everyone will like. Or like someone whom only particular people will like.

The feature also offers alternative suggestions across subjects such as age bias, cultural bias, ethnic slurs, gender bias and racial bias.
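For the curious: Microsoft hasn’t published how Editor decides what to flag, but the general mechanics (opt-in categories, a simple lookup, a softly offered alternative) are easy to imagine. Here’s a toy sketch in Python; every category name, word list and suggestion in it is my own illustrative invention, not Microsoft’s actual data or code.

```python
# A toy, per-category "inclusiveness" flagger. The categories, word lists
# and suggestions below are illustrative inventions, not Microsoft's data.
SUGGESTIONS = {
    "gender_bias": {"mankind": "humankind", "mistress": "partner"},
    "cultural_bias": {"blacklist": "blocklist", "whitewash": "cover up"},
}

def flag_terms(text, enabled):
    """Return (word, category, suggestion) for each flagged word in text.

    Only categories the user has opted into are checked, mirroring the
    opt-in, per-category toggles described above. Nothing is autocorrected;
    the caller decides what to do with the gentle whisper.
    """
    findings = []
    for raw in text.lower().split():
        word = raw.strip(".,;:!?\"'")
        for category in enabled:
            suggestion = SUGGESTIONS.get(category, {}).get(word)
            if suggestion:
                findings.append((word, category, suggestion))
    return findings

# Gender-bias checker on, everything else off.
print(flag_terms("A gift for all mankind.", {"gender_bias"}))
# -> [('mankind', 'gender_bias', 'humankind')]
```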

Microsoft says it performs due diligence when it comes to deciding which words and phrases need a prompt and which don’t. Which sounds like an extremely difficult — and painfully subjective, at times — job.

A few glances at the Mail’s comments section offered several tones of outrage.

Sample One: “If you can’t control thoughts, control language. George Orwell, 1984.”

Sample Two: “but don’t worry, you can turn it off—————-for now.”

And please don’t get me started on the social media reactions.

It all left me feeling a little twisted. It’s bad enough enduring autocorrect, which still doesn’t always (often? ever?) understand what you’re trying to say. Now, you can have another potentially self-righteous thing looking over your shoulder?

And since when has Microsoft’s grammar police force offered useful suggestions? I tried it once. Never again. (That two-word sentence is probably bad grammar.)

But, wait. Who’s using this?

Ultimately, I wondered one mere thing: Who actually wants this?

Is it people who live in fear of offending?

Is it companies who live in fear of offending?

Is it those who genuinely have no idea which words and phrases are acceptable in a business sphere (or any other) and which aren’t?

Is it people who are desperate to be politically correct — in a business sense, not merely a political sense — because they believe their careers may depend on it?

Is it people at Microsoft?

In which case, this Microsoft feature ought to embrace a lot more words and phrases than simply those that suggest an obvious lack of inclusivity. 

The possibilities are profound, as well as problematic. I’m not sure even Microsoft knows what to do with it all.

It’s not as if Microsoft has made the feature a default — or even easy to locate. It’s buried under “Grammar and Refinements.”

Software that awakens. Is it a good thing?

But ultimately, isn’t it all a little sad?

Not because one doesn’t want everyone to be inclusive — it really is about time that happened — but because you’d wish the responsibility weren’t abdicated to technology.

What’s even more, well, interesting is the peculiar freedom this tool offers. You can turn on, say, the gender-bias checker while turning off the one for ethnic slurs.

One can imagine the mental contortions for the user: “How would I like to sound today? Like a sexist bigot or a woke bigot?”

And what if you’re in a corporate context and your IT administrator can identify your settings?

They might mutter: “Aha, Henry fears he’s racist, but he’s really strong on gender inclusivity.”

Some might wonder whether this feature allows people to be even more fake at work than they already are.

Many may feel it’s more uplifting to know who you’re working with and to get a strong sense of what they’re really like.

If, instead, they’re using technology to mask aspects of their true selves, is it a little dispiriting? Or is it merely part of modern life?

Technology has made it so much easier to fake so much. We’re constantly on our guard, wondering what’s real and what isn’t.

Would you rather know, when you see something that someone has written, that it’s a reflection of their true voice and self?

Why rely on technology to Botox your lines?

Your writing is you, right?

That’s why you always begin emails with “Hi! I hope this finds you well.”
