bcr podcast 111: Mad about Lucy


Welcome to the Better Communication Results podcast 111.

What is artificial general intelligence?

We recently read a lengthy exploration of artificial intelligence that started from the fact that artificial general intelligence (AGI) doesn’t yet exist.

AGI is the stuff of fiction: machines as smart as, then smarter than, the best brains of the human population. Researchers in the field say we are a long way off, maybe many decades or more, from a machine that can approach the general intelligence of humans. Ray Kurzweil, the famed technology futurist, is convinced that AGI will emerge around 2045, however, and Ray has a history of being just about right with his technology predictions.

Current artificial intelligence is good at narrow tasks: driving one specific car, or answering one specific question. But it is not much good at combining tasks. A human who can read Mandarin, for example, can probably also speak it. For an artificially intelligent machine, though, the two are separate and distinct processes; just because one AI system can read Mandarin, it can’t be assumed that it can also speak it.

Another example: walking into someone else’s house and noticing the coffee machine, finding some cups, getting some milk from the fridge, and making a cup of coffee is relatively easy for an adult. It is currently an impossible set of tasks for a robot.

So we might be moving slowly towards some sort of smarter, cross-modal, cross-function artificial intelligence, but we are a long way off from Robocop.

Source: https://www.zdnet.com/article/what-is-artificial-general-intelligence/


We’re mad about Lucy

Our friend Geoff Livingston is back with another cracker of an article. This time he’s scored an interview with one of the founders behind Lucy, a multiple-algorithm tool that brands such as BMW have used to great effect.

Now just three years old, Lucy has advanced to become a powerful segmentation, advertising and content optimization tool for marketers.

Part of what drives the team at Equals3 is their quest to democratise data. There is no point, they say, in knowledge sitting in the hands of a tiny few specialists, such as analysts, CIOs or marketers. If, for example, the company website is an important business tool, then let everyone know how it is performing today against yesterday and last month. Lucy helps those democratising conversations take place by freeing up the results of the algorithms.

“I think what you’re going to have is those businesses that don’t use AI at all are going to have issues. Those businesses that over-rely on AI and say, ‘Hey, I don’t need as many people, I’ve got the AI to do it’? They’re going to have issues.”

Scott Litman, Equals3

“We’re seeing that Lucy is simply a tool of automation. She is allowing people to get stuff done more efficiently and get to better results faster. And we have yet to see any job displacement, but we have seen people increase their bandwidth and throughput and be able to get more stuff done.” 

Scott Litman, Equals3

You can find a link to Geoff’s post in the show notes.

Source: https://medium.com/datadriveninvestor/talking-about-lucy-and-marketing-ai-7ede88ac07c1


It’s a long way to the top…

But having said that we are a long way away from machines with human intelligence, one thinker believes we might be closer to human-like machines and Kurzweil’s 2045 timeline than we think.

Recent tech advances, such as human-sounding AI tools, highlight why companies have to establish ethical standards in the use of AI.

The live demonstration of Google’s Duplex AI system, complete with human-like voice interactions, shows that artificial voices can sound just as real as human ones. In the past, we haven’t worried about computer-generated voices because they were so obviously inhuman. Now, with Duplex, we face a future where the vulnerable are open to abuse.

“Imagine a scenario where fake AI calls clog up emergency phone lines, or an AI phishing scam that calls people pretending to be from a bank to get their card details. It would be a Duplex-powered disaster.”

David Okuniev, Typeform

That is why there is a growing call for some ethics standards to be built into artificial intelligence, although how that will be achieved in practice is anyone’s guess. If Russia can influence elections and create fake anti-vax posts, then why can’t they or North Korea create rogue AI?

Source: https://www.informationweek.com/big-data/the-future-of-ai-is-more-human-than-we-think/a/d-id/1332634


Using AI to benefit your email marketing

Email is the default tool for online marketing, and many marketers swear by its efficacy. Georgine Anton from Accenture offshoot MXM talks about the inroads AI is making into customising emails for better results.

“It may seem strange that the best way to personalize email is by automating the process of creating it. Unless your email list is less than 100, the reality is that personalizing is too hard for humans to accomplish. Artificial intelligence can do that manual work on behalf of marketers, leading to messages that are conceived with the end user in mind. That’s the promise of artificial intelligence-based marketing.”

Georgine Anton, MXM

Georgine explains how automation can use the language of the recipient, gathered from their social profiles and previous email correspondence, to create compelling messages and calls to action.
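To make the idea concrete, here is a minimal sketch of that kind of rule-based personalisation: pick a greeting and call to action based on what is known about the recipient. The profile fields, copy variants and URL are invented for illustration; commercial AI tools infer these signals from far richer data than two hand-set flags.

```python
# A toy example of automated email personalisation. The Recipient fields
# (tone preference, last-clicked topic) stand in for signals a real tool
# would infer from social profiles and past email activity.

from dataclasses import dataclass


@dataclass
class Recipient:
    name: str
    prefers_casual_tone: bool   # e.g. inferred from social profiles
    last_clicked_topic: str     # e.g. inferred from past email behaviour


def build_email(r: Recipient) -> str:
    """Assemble a short message tailored to the recipient's profile."""
    greeting = f"Hey {r.name}!" if r.prefers_casual_tone else f"Dear {r.name},"
    cta = f"See what's new in {r.last_clicked_topic}"
    return f"{greeting}\n\n{cta} -> https://example.com/{r.last_clicked_topic}"


if __name__ == "__main__":
    print(build_email(Recipient("Sam", True, "analytics")))
```

At scale, the same template logic runs once per subscriber, which is the "too hard for humans" work the quote above describes.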

Source: https://www.adweek.com/brand-marketing/how-brands-can-use-ai-to-boost-their-email-marketing-strategy/


And lastly, on reading your employees emails

Text analytics is a growing field for employers. Analysis of employee emails can bring clarity about how employees feel about the organisation and those chosen to run it.

US-based research company KeenCorp analyses a company’s emails to flag areas of potential concern. Using publicly available email data from Enron, for example, it correctly detected a drop in employee engagement around two years before Enron’s spectacular collapse. But the researchers were at a loss to explain it: nothing in the numerous books published about the collapse hinted at why engagement dipped so sharply. It took an interview with a former Enron executive to uncover what the drop was about.

It turns out that the CEO and the board had engaged in an illegal cover-up of some poorly performing assets. Some employees became aware of what was going on, and their emails, while never addressing it directly, hinted at their unease. That unease was the drop the KeenCorp researchers had detected.

Text analysis isn’t new. The finance and stockbroking industry has long pored over company reports, disclosures and the like, attempting to ‘read the tea leaves’ with an accuracy that humans alone cannot match. What IS new is companies analysing their own employees’ emails to determine whether trouble spots are brewing, such as a manager starting an affair with a subordinate, or audit concerns that accountants have failed to flag.
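For a rough feel for the kind of signal such tools track, here is a minimal sketch that scores batches of emails with a simple word-list sentiment measure and averages them per month, so a sustained dip stands out. The word lists and sample messages are invented; KeenCorp’s actual linguistic model is proprietary and far more sophisticated than counting words.

```python
# A toy engagement index over monthly batches of emails, loosely inspired
# by the idea of spotting a dip in how employees write. Word lists here
# are invented for illustration only.

NEGATIVE = {"concerned", "worried", "uncomfortable", "wrong", "risk"}
POSITIVE = {"great", "confident", "pleased", "happy", "excited"}


def score(email: str) -> int:
    """Crude per-email score: positive word count minus negative word count."""
    words = email.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)


def engagement_index(emails_by_month: dict[str, list[str]]) -> dict[str, float]:
    """Average score per month; a sustained drop flags a period worth reviewing."""
    return {
        month: sum(score(e) for e in emails) / len(emails)
        for month, emails in emails_by_month.items()
    }


if __name__ == "__main__":
    months = {
        "2000-01": ["great quarter very pleased", "confident about the plan"],
        "2000-06": ["worried this feels wrong", "uncomfortable with the risk here"],
    }
    print(engagement_index(months))
```

Even this crude index would show the second month scoring well below the first; the hard part, as the Enron story shows, is working out what a real dip actually means.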

KeenCorp say they strip all identifying information from their data, so no one can be singled out for blame; a company that wants to identify individual employees will have to find some other way. Of course, in most companies the organisation itself, not the employee, owns the email. But it is a brave employer that willingly lets employees think Big Brother is watching them.

Source: https://www.theatlantic.com/magazine/archive/2018/09/the-secrets-in-your-inbox/565745/


And that’s it for this edition of the Better Communication Results podcast. Subscribe to our podcast in iTunes, or over at SoundCloud, and until next time, remember to communicate with passion.

We post updates once or twice a week, usually at the beginning of the week.