Tagging.tech presents some of the backstory behind the new book titled Keywording Now by Henrik de Gyor. If your business is thinking about keywording services and/or image recognition, this Amazon Kindle book is the first and only book of its kind on this specific topic, and the only book on Amazon about keywording.
Henrik de Gyor: This is Tagging.tech. I’m Henrik de Gyor. Today I’m speaking with Jason Chicola.
Jason, how are you?
Jason Chicola: Doing great, Henrik. Thanks a lot for taking the time.
Henrik de Gyor: Jason, who are you and what do you do?
Jason Chicola: I’m the founder of Rev.com. Rev is building the world’s largest platform for work-from-home jobs. Our mission is to create millions of work-from-home jobs. Today, we have people working on five types of work, jobs they could do in their pajamas, and the main ones are audio transcription and closed captioning. Several of my co-founders and I were early employees at Upwork, which is the largest marketplace for work-at-home jobs. Rev takes a different approach than Upwork. With Rev, we guarantee quality, which means that the task of managing a remote freelancer and hiring the right one is something that our platform excels at. What that means is that our customers get a very easy-to-consume service. You can think of us as Uber for work-at-home jobs. So if you wanted to come to us to get, say, for example, this call transcribed, as a customer all you have to do is upload an audio file to a website, and then a couple of hours later you’ll get back the transcript. Now, behind the scenes, there is an army of freelancers doing that work, and we have built our technology to make their lives easier and make them more productive. If I zoom out from all of this, I look at the world and see a lot of people who are sitting in cubicles who probably shouldn’t have to, sitting in traffic when they shouldn’t have to, and I look at all the kinds of jobs you do today at a computer. How many of those jobs need to be done in a cube farm? How many of them could be done from home? We think many of them can be done from home. And our mission is to give more people the opportunity and the freedom to work from home, which allows them not only to choose their location but also gives people more control over their lives, because they can decide whether they want to work in the morning versus the early afternoon. It means you’re not tied to a single boss or employer.
It means that you can work on one skill on Monday and a different skill on Tuesday, and go surfing or hiking on Wednesday if you feel like it. So that’s how we think about our business: it’s really centered on giving people this freedom that comes when they can be their own boss and work from home. And as a segue to some of your next questions that you and I discussed in the past, as we got deeper and deeper into creating jobs for transcriptionists, we have invested in technology to make their jobs easier and to make them more productive. That has led us to develop some competency and familiarity with what you’re calling here AI transcription, which means using a computer to transcribe audio. That’s what I’d call a relatively new area for us, but an important area, especially in light of people being familiar with Amazon Alexa and Apple’s Siri. So that’s a new small business, but the core is giving people work they can do with a computer. Most of that work today is listening and typing.
Henrik de Gyor: Jason, what are the biggest challenges and successes you’ve seen with AI Transcription?
Jason Chicola: It’s really early to judge that. I can give you a specific example in a moment, but it’s a little bit like asking someone today what the biggest challenges and successes of self-driving cars are. The answer, I think, is that the successes so far have been small, but the possible successes in the future could be massive. I really believe that we’re not even in the first inning. Maybe we’re in the warm-up for the first inning of this game, and I think it’s going to be a pretty exciting decade ahead of us as computers have gotten better, as more audio is captured in digital formats, and as companies like Rev innovate in a bunch of areas. We’ve had success today in this area, but it’s been at the fringes of our business, so I’ll give you a specific example: when Rev transcriptionists type out an audio file, like somebody might for this phone call, some customers request timestamps, and part of the humans’ job was to note, for example, at the end of every minute, that this event occurred at three minutes, this event occurred at four minutes, and so forth. That was an additional task they performed manually while they did their job. We automated that using what you could call AI transcription. So now not only are timestamps inserted automatically, but every single word is marked by the AI with when it was said. So literally, for every single word, we know this word occurred at 4:38 and that word occurred at 5:02. That’s something we’ve done that automated something previously done manually, and it actually made for a much better experience for the customer, because the timestamps are more accurate. That’s something we already have today. The challenge… the challenge list is longer. The biggest challenge to be aware of when it comes to automated transcription is that it’s garbage in, garbage out.
Other people say you can’t make chicken salad out of chicken [****]. If you go to Starbucks, sit outside by a noisy street, record an interview with someone you’re talking to for a book, and submit it to some automated engine, you’re not going to get back anything that is very good. It’s obvious why that is, but the quality of speech recognition depends, I would say, on three or four key factors other than the quality of the [speech recognition] engine itself. One is background noise: the less, the better. Another is accents: the less, the better. Another is how clearly the person is speaking. Are they enunciating? Are they slurring their words together? Are they speaking really quickly? Those tend to be the major factors. There is probably another one related to background noise, which comes down to the quality of your microphone and how far you are from the microphone. You are a podcaster, so you probably know far more about how to record clear audio than most people do. Most people throw an iPhone onto a table next to somebody who’s eating a bag of Doritos. [laugh] So you have great audio of someone eating a bag of Doritos, which causes problems downstream, and some of those people, because they don’t think about it, will say, “Hey, I’m really annoyed. You didn’t get this word right.” And that’s because somebody was eating a bag of Doritos at the time that word was said. So part of our job, as we try to get better at helping people transcribe quickly and cheaply, is to help customers understand that you need to record good audio if you want to get a good outcome.
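To make the per-word timestamp idea Jason describes concrete, here is a minimal, hypothetical sketch (not Rev’s actual code; the data values are invented) of what word-level timing data from an automated engine might look like, and how it could be rendered into the m:ss timestamps he mentions:

```python
# Hypothetical illustration of per-word timestamps: every word carries
# the time (in seconds) at which it was spoken, and we render that as
# a human-readable m:ss stamp.

def format_timestamp(seconds):
    """Render seconds as m:ss, e.g. 278 -> '4:38'."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"

def annotate_words(timed_words):
    """Attach an m:ss timestamp to each (word, seconds) pair."""
    return [(word, format_timestamp(t)) for word, t in timed_words]

# Invented sample data mimicking an engine's per-word output.
words = [("revenue", 278), ("grew", 278.4), ("twelve", 302)]
print(annotate_words(words))
# -> [('revenue', '4:38'), ('grew', '4:38'), ('twelve', '5:02')]
```

This is roughly the shape of output several commercial speech engines expose when word-level time offsets are enabled.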
Henrik de Gyor: Jason, as of January 2018, how much of the transcription work is completed by people versus machines?
Jason Chicola: Are you talking about the work that Rev does?
Henrik de Gyor: Sure.
Jason Chicola: It depends on how you slice it, but I’ll say 99% people, 1% machine.
Henrik de Gyor: Fair.
Jason Chicola: We actually have… I’ll be a little more clear on that. We recently released a new service called Temi, at Temi.com. That is an automated transcription service where people are not doing the work; machines are. And then our core service, Rev.com, is done basically entirely by people. We believe that’s required to deliver the right level of accuracy. I don’t want to jump ahead to your next questions, but we clearly see these two blending and merging a little more over time. Today, though, if you want good accuracy, you need people to do it. To give you some external context: an earnings call is transcribed for Wall Street analysts, and if a machine does it and makes a mistake on, you know, a key number, or on whether the CFO said that something happened or didn’t happen, that’s a big problem. Or if a movie is captioned for HBO. Game of Thrones is captioned for HBO. Those captions need to be right. So in any use case where people want a transcript that is accurate, today they need to have people in the loop.
Henrik de Gyor: Makes sense. Jason, as of January 2018, how do you see AI transcription changing?
Jason Chicola: I think the most obvious change, one that people expect and that has probably been slower in coming than people expect, is that the machine is going to help the man in this proverbial battle of man versus machine… wasn’t there a kids’ story about John Henry, the guy that fought the train? You know, there is this narrative in popular culture that robots are going to take our jobs, and there are sectors where that has been a big problem for people. But I think there’s a broader trend across more industries where technology seeps into our day-to-day lives in little ways that help us eliminate the parts of our jobs that suck: printing photocopies, running to Office Depot, and, you know, changing formats for documents. Those things go away. What I expect to see is more transcription happening in a two-step process where first the machine takes a cut, then the human tweaks it.
Jason Chicola: There are some companies that have tried this in the past that, by and large, didn’t do well because their quality sucked. There are some companies that do this well in the medical transcription space. But the trend that I would encourage your listeners to look for is not my idea. There’s a book called Race Against the Machine, written by a couple of academics out of the MIT Sloan School, and in the final chapter they talk about the rise of automation and AI and how it’s going to affect the economy and jobs broadly. They concluded that rather than having a showdown against the machines, the best companies were going to be the ones that found a way to, in their words, “race with the machines”: the machines should help people do their jobs better. So I would look for examples where software can make people better at their jobs, and I think that’s the trend to keep an eye on. There are people that say, “AI is going to do everything.” Well, to those people, I would say: are you saying that quality is going to stop mattering? Is HBO going to stop caring about the quality of its captions? Is the Wall Street trader who is reading earnings calls as they come in, so he can decide whether to buy or sell a given stock, going to want more mistakes in those documents? I don’t think so. I think these people want accuracy, and I think humans are needed to deliver accuracy.
Henrik de Gyor: Jason, what advice would you like to share with people looking into AI transcription?
Jason Chicola: Well, it depends on what their objective is. But if somebody has audio recordings of meetings or dictation that they want to use productively, I would certainly recommend that they try our service, Temi.com. In fact, right now, if you download our mobile app, which is available on both iOS and Android, all transcriptions submitted through the mobile app are free for a limited time. I want to repeat that: you can get free unlimited transcription for a limited time through the Temi mobile app on iOS and Android. That’s a good place to start because it doesn’t cost you much. That was a self-serving comment, though. Beyond that, there are transcription engines available today from a variety of companies, some of them well known and large. For example, Google has one among the Google Cloud products; you can play with that. Amazon has announced a couple of products related to transcription. They have one called Amazon Transcribe, which I don’t believe has formally launched at scale yet. It might be in a private beta, but that’s one to keep an eye on. They also have a product called Amazon Lex: if you’re a software developer who wanted to build an Alexa-like app where you control the voice commands, Amazon Lex is designed to help you with that. There are some smaller companies in the space as well; if you google, you’ll find them. But I would point to those companies as good reference points for people trying to figure out the category.
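If you do try several engines on the same recording, as Jason suggests, a standard way to compare their accuracy is word error rate (WER). Here is a minimal, self-contained sketch (an editorial illustration, not any vendor’s code) that computes WER with a classic edit-distance table:

```python
# Word error rate (WER): (substitutions + insertions + deletions)
# divided by the number of words in the reference transcript,
# computed via dynamic-programming edit distance over word lists.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# The hypothesis drops one word out of six, so WER is 1/6.
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

Running each candidate engine’s output through a check like this against a carefully proofread reference transcript gives a concrete number for the accuracy trade-offs discussed above.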
Henrik de Gyor: Excellent. And Jason, where can we find out more information about AI transcription?
Jason Chicola: The Temi blog has some good information. So if you go to Temi.com and click on the blog link in the footer, there’s a bunch of articles that address topics we think are interesting. Beyond that, googling is great. You know, there are some more specialized publications in the speech world; most of them are too technical for a general audience. There is a conference called SpeechTEK that is pretty good. We’ve been a couple of times, for those really serious about it. But I think between those resources and googling, somebody is probably in pretty good shape. If folks have large needs to transcribe a lot of audio, contacting Rev or Temi is a good idea, because we can often point you in the right direction.
Henrik de Gyor: Well, thanks, Jason.
Jason Chicola: It’s really a pleasure to chat today. I really believe that 2018 is going to be marked as the first year that transcription starts to move up the importance scale. Everybody I know probably has a couple of these listening devices in their home. Everybody I know is really struggling with Siri, and people are starting to think about how to use voice differently. We talked today about transcription, and that’s how we framed the conversation. I think that’s a fine framing, but it’s a bit backward-looking. If I look into the future, I think there’s a whole new set of behaviors likely to happen. So when I or a colleague of mine is driving to the office, and I have an important meeting or presentation or board meeting later in the week, shouldn’t I be effectively dictating notes to self that I can use later in that presentation? Shouldn’t I be trying to talk more than I type, and use an app to build notes, knowledge, and insights?
Jason Chicola: I think that transcription implies an existing recording sitting on the shelf, whereas using voice to be more productive, I think, is going to be a major behavior change that we’re likely to see in the next couple of years, and we’re trying to accelerate that with our products. Clearly, there are other companies out there as well, and we wish everyone luck. I think it’s a big space, but I’m glad that we were able to have this conversation because, hopefully, if we listen to this conversation a couple of years from now, hopefully we’ll have gotten a couple of things right.
Henrik de Gyor: Awesome. Well, thank you for leading this voice-first revolution, and thanks again.
Henrik de Gyor: This is Tagging.tech. I’m Henrik de Gyor. Today, I’m speaking with Emily Klovitz.
Emily, how are you?
Emily Klovitz: I’m doing great. How are you, Henrik?
Henrik: Good. Emily, who are you and what do you do?
Emily: I’m a DAM consultant, marketer, and digital asset manager for Bynder. We’re an award-winning digital asset management software company that allows brands to create, find, and use content such as documents, graphics, and videos. Before joining Bynder, I worked as a digital asset manager for JCPenney. I have an MLIS, my master’s in library and information studies, from the University of Oklahoma. I’ve worked with hundreds of different clients on their DAM implementations, providing best practices and consultation. Because I work with clients, I’m often able to see the very real-world implications of what AI tagging can actually be like with live collections of content. The successes and challenges are very real, very tangible, and that’s not always something you see when you’re watching a webinar or a product demo.
Henrik: Emily, what are the biggest challenges and successes you’ve seen with image recognition?
Emily: For challenges, of course, there are some challenges and opportunities for improvement when it comes to AI tagging. I think many of them have to do with the application and configuration of the AI, not necessarily the technology itself. One specific limitation currently in our own implementation of AI is that we only have US English tags at this time. We wanted to stake a claim in the AI space very quickly, so English was part of our MVP for AI features. Obviously, there’s more to come in the future. I think some other limitations include things like only certain file types being scanned, such as JPEG and IMG, so there’s an opportunity to extend this out to things like video, documents, etc. Many other companies are already doing this, companies like Ancestry.com, for example, or even DocumentCloud, which runs your documents through Thomson Reuters Open Calais to extract entities, topic codes, events, relations, and social tags. In addition, there’s a full list of AWS limitations on the Rekognition site as well, which is what we use. In terms of more general challenges, there are things like mistakenly tagging something in a way that’s hurtful or harmful in some manner. Those are things that don’t usually become apparent until after the fact. I think that AI tagging is very much in its infancy in terms of its application, and we’ll see it greatly grow and mature in the coming years, where we may start to see challenges like information and privacy concerns pertaining to facial recognition. Being able to opt out of these things will be a big need for clients.
As far as successes go, AI tagging detects objects and scenes and can identify thousands of objects such as vehicles, pets, and furniture, and it provides a confidence score, which simply tells you how confident the AI is that a tag is relevant and accurate. It’ll detect scenes within an image, so things like a sunset or a beach. This has really big implications for search, filtering, and curating very large image libraries. From my perspective alone, the time-saving factor for DAM managers, digital asset librarians, content managers, and admins of the system is probably one of the biggest successes of AI tagging. They spend an enormous amount of time and resources on metadata application alone. It’s tedious, thankless work, but absolutely necessary so that people can find the assets they need.
In terms of other things, I think it’s also helping to put a minimum viable metadata on a very large digital asset collection that may otherwise remain untagged. For DAM, it means that uploaded images get auto-tagged, helping with categorization, identification, and searchability of assets that could possibly otherwise be buried in the depths of your collection without metadata.
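The confidence scores Emily describes are typically used to decide which machine-suggested tags actually get applied. Here is a minimal, hypothetical sketch (an editorial illustration; the label names and scores are invented, loosely mimicking the shape of results from a service like Amazon Rekognition) of thresholding tags by confidence:

```python
# Hypothetical sketch of confidence-thresholded auto-tagging for a DAM:
# keep only the tags the image-recognition engine is sufficiently sure
# about, so low-confidence guesses never pollute the asset metadata.

def auto_tags(labels, min_confidence=80.0):
    """Return the names of labels whose confidence meets the threshold."""
    return [label["Name"] for label in labels
            if label["Confidence"] >= min_confidence]

# Invented sample data shaped like an image-recognition API response.
labels = [
    {"Name": "Beach", "Confidence": 97.2},
    {"Name": "Sunset", "Confidence": 88.5},
    {"Name": "Furniture", "Confidence": 41.0},  # too uncertain to tag
]
print(auto_tags(labels))
# -> ['Beach', 'Sunset']
```

Tuning the threshold per collection is one practical way to trade recall against the mistaken-tag risks Emily raises above.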
Henrik: Emily, as of July 2017, how do you see image recognition changing?
Emily: It’s becoming a de facto feature of digital asset management systems and less of a fun, nice-to-have novelty feature; it’s becoming something you have to have.
Henrik: What advice would you like to share with people looking into image recognition?
Emily: This is a good one. If you can, provide a sample of your assets to different vendors and ask for results. It’s very easy to see a webinar or a product video showing 100% accuracy and it’s really neat, but it’s also really important to try out a wide variety of image assets to see where the real limitations are for each image type and the associated algorithms.
Henrik: Where can we find more information?
Emily: There are lots of places on the internet where you can find more information about AI tagging. You can find information from us specifically on our blog, blog.bynder.com. Amazon’s Rekognition website has a great FAQ that you can check out. We also did a presentation on image recognition and AI at the photo metadata conference in Germany, the IPTC Photo Metadata Conference. There’s a PDF and a video of this presentation available on IPTC.org.