tagging.tech

Audio, Image and video keywording. By people and machines.



Tagging.tech interview with Mark Sears

Tagging.tech presents an audio interview with Mark Sears on crowdsourcing

 

Listen and subscribe to Tagging.tech on Apple Podcasts, AudioBoom, CastBox, Google Play, RadioPublic or TuneIn.

 

Transcript:

 

Henrik de Gyor:  This is Tagging.tech. I’m Henrik de Gyor. Today, I’m speaking with Mark Sears. Mark, how are you?

Mark Sears:  I’m doing great, Henrik. Thank you.

Henrik:  Mark, who are you and what do you do?

Mark:  My name is Mark Sears. I’m Founder and CEO of CloudFactory. We spend a lot of time leveraging an on-demand workforce to structure data. We take a lot of unstructured data from clients and we process it in the cloud using a combination of human and machine intelligence. We do that mostly for technology companies that are looking for an API-driven workforce for tons of different use cases. Very relevant to tagging tech would be things like tagging images for the purpose of machine learning, or tagging images as part of core business processes for things like intelligence. We do transcription and translation. We do a lot of document processing, things like processing receipts and invoices. We do web research, going out to do human-powered screen scraping for lead generation and CRM enrichment.

A lot of different, very tedious, routine, repetitive work. We do it in a bit of a different model, what we refer to as cloud labor: the ability for organizations to send their work to the cloud and have it come back done accurately, quickly and cost-effectively, in hours if not minutes. That’s the world that we play in.

Henrik:  Mark, what are the biggest challenges and successes you’ve seen with crowdsourcing?

Mark:  When we think of crowdsourcing, we often like to compare it to the more traditional outsourcing model. We actually consider ourselves to be somewhere in between. Traditionally, getting this work done meant a large number of people working in a delivery center: offshoring, outsourcing. That’s one option that a lot of companies have used in the last 20 years: send the work to a team, maybe thousands of people, sitting in urban India, the Philippines or China. That’s one way to get a lot of this type of paperwork done.

Another way that’s become more popular recently is to send it to a crowd and do crowdsourcing. Our view is that when you send work out to an anonymous crowd, where someone maybe just signs up online, there’s not a really high level of engagement or accountability, or the ability to get quality out of an anonymous, faceless crowd. We see that on one side of the spectrum, and traditional outsourcing on the other side. The view of the world that we have is right in between: the idea of an on-demand workforce that is highly efficient because it leverages automation and technology, but at the same time is not an anonymous crowd. It’s a crowd we actually know, train, professionally manage and curate. That’s a roundabout way of talking about how we view the world, and what I’ve seen and learned through a lot of different projects is that the biggest challenge is often quality.

Harnessing the power of an anonymous crowd is something that’s quite hard to do. So we love playing in the hybrid, finding that radical middle where you get the best of all worlds in terms of quality, scalability, elasticity, cost-effectiveness, turnaround speed, etc. to accomplish your large data work projects.

Henrik:  Mark, as of April 2016, how do you see crowdsourcing changing?

Mark:  Moving forward, there’s no question that the rise of robots and the flattening of the world are two major trends affecting not just crowdsourcing but the future of work, and how enterprises get their work done. The world is becoming more and more flat, mostly because of the internet and the falling cost of devices to access it. 1.1 billion people have come online in the last five years, and another billion are expected in the next five years.

So you have this massive, global workforce that is now able to contribute to tagging and, again, the routine, repetitive work that every organization has deep inside that needs to get done. There’s new, untapped potential in being able to do online work and to leverage talent that is equally distributed around the world, while acknowledging that opportunity is not. We can really flatten the world with the internet, with crowdsourcing and other online work approaches.

The other side of it, again, is automation and the rise of robots. Any project or solution that doesn’t think first about how to automate is going to be left behind. We absolutely have to leverage technology. Automation takes on different forms. Actually automating the work itself, using AI, ML, etc. to automate pieces of tagging, labeling, video and audio transcription and processing workloads, is definitely essential, but a lot of the technology just is not there yet. So look first to see which pieces you can actually automate.

And then, of course, there’s the delivery and receipt of the work: having an API to send the work in and have it sent back once it’s completed. Automating the workflow as well streamlines and speeds things up and makes things more cost-effective.

So there’s automating the actual work, and there’s automating the process of getting the work done, delivering and receiving it. That’s a huge trend: everyone is asking how to make these projects more streamlined, more efficient, faster, more cost-effective, with fewer manual touches. That does include trying to automate as much of the work as we can. One thing we have really seen is the desire and requirement to find the right mix of human and machine intelligence for every project, for every solution. It really is different for every solution.

We try to automate as much as we can, but there are a lot of nuances, doing split A/B testing to understand what the best total cost of ownership of the solution is depending on how much automation you include. Those two trends definitely play into the future of getting this type of work done.
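The human/machine mix Mark describes is often implemented as a confidence threshold: machine output that clears the bar passes straight through, and everything else falls back to human review. The sketch below is a minimal illustration of that routing idea, not CloudFactory’s actual pipeline; the threshold value, task fields and stand-in tagger are all invented for the example.

```python
# Minimal sketch of confidence-threshold routing between machine and
# human workers. The model, threshold and task fields are illustrative
# assumptions, not any vendor's real API.

CONFIDENCE_THRESHOLD = 0.85  # tuned per project, e.g. via A/B testing


def machine_tag(image_id):
    """Stand-in for an automated tagger; returns (labels, confidence).

    In practice this would call a trained model or a vision API.
    """
    fake_results = {
        "img-001": (["cat", "pet"], 0.93),   # confident machine result
        "img-002": (["blurry?"], 0.40),      # low-confidence result
    }
    return fake_results.get(image_id, ([], 0.0))


def route_task(image_id):
    """Pass high-confidence machine output through; otherwise queue
    the task for human review."""
    labels, confidence = machine_tag(image_id)
    source = "machine" if confidence >= CONFIDENCE_THRESHOLD else "human_review"
    return {"image": image_id, "labels": labels, "source": source}
```

Finding the right threshold is exactly the total-cost-of-ownership trade-off above: a lower bar means cheaper, faster output but more machine errors; a higher bar means more tasks routed to (slower, costlier) human review.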

Henrik:  Mark, what else would you like to share with people looking into crowdsourcing?

Mark:  I think the key thing is understanding self-serve versus full-serve. There’s no question there’s power in leveraging a global workforce, accessing it online and being able to send your repetitive data projects to a crowd. But doing that well takes experience. A lot of people like a self-serve approach, accessing it themselves. Other people prefer to have experts there to help along the way, making sure that you’re getting the quality out of the crowd that you’re expecting.

As you look at the landscape, one way to think about your project is: am I ready to do this on my own, or is it better to work with a more enterprise-grade approach? We often encourage people to think about that span. If you’ve got a smaller project that you need done really quickly, and quality is not the highest priority, you just need it done quick and cheap, then self-serve options to send that work out and get it back are really where you want to be going.

If you have a larger project or an ongoing project, one that requires really good, accurate work, or where there’s an opportunity to automate a portion of it, then you want to be looking for something a little more enterprise-grade, maybe a full-service, professional-service type approach. That is a key thing we would recommend people think through as they begin to look at crowdsourcing as a way to get their project done.

Henrik:  Mark, where can we find more information about crowdsourcing?

Mark:  Crowdsourcing as a term has definitely broadened and changed. Googling crowdsourcing is going to lead you in a lot of different directions, from crowdfunding to Wikipedia. There definitely are some sources out there, but there are not that many players really in this space. I think it’s great to take a look at everyone’s approach: exactly what tools they provide access to, where you’d access the crowd, the services they provide, and how they manage, recruit, train and vet their workforce, their crowd. Probably the best way is really to get out there and explore some of the different options that are available from different partners.

Specifically, in terms of finding some other places online to learn, crowdsourcing.org is one good resource. They have a cloud labor tab with some good information. You can follow along and see how people are leveraging these distributed, virtual labor pools to fulfill a large variety of tasks. That’s one great place. Obviously, our particular take on the world at cloudfactory.com is another option, with thoughts, resources and articles that help people think through how to leverage a technology platform with a global workforce to accomplish their large data projects.

Henrik:  Well, thanks Mark.

Mark:  Thank you.

Henrik:  For more on this, visit Tagging.tech. Thanks again.


 

For a book about this, visit keywordingnow.com

 

 



Tagging.tech interview with Brad Folkens

Tagging.tech presents an audio interview with Brad Folkens about image recognition

 

Listen and subscribe to Tagging.tech on Apple Podcasts, AudioBoom, CastBox, Google Play, RadioPublic or TuneIn.

 

Transcript:

 

Henrik de Gyor:  This is Tagging.tech. I’m Henrik de Gyor. Today I’m speaking with Brad Folkens. Brad, how are you?

Brad Folkens:  Good. How are you doing today?

Henrik:  Great. Brad, who are you and what do you do?

Brad:  My name’s Brad Folkens. I’m the CTO and co-founder of CamFind Inc. We make an app that allows you to take a picture of anything and find out what it is, and an image recognition platform that powers everything and that you can use as an API.

Henrik:  Brad, what are the biggest challenges and successes you’ve seen with image recognition?

Brad:  I think the biggest challenge with image recognition that we have today is truly understanding images. It’s something that computers have really been struggling with for decades in fact.

We saw that with voice before this. Voice was always kind of the promised frontier of the next computer‑human interface. It took many decades until we could actually reach a level of voice understanding. We saw that for the first time with Siri, with Cortana.

Now we’re kind of seeing the same sort of transition with image recognition as well. Image recognition is this technology that we’ve had promised to us for a long time. But it hasn’t quite crossed that threshold into true usefulness. Now we’re starting to see the emergence of true image understanding. I think that’s really where it changes from image recognition being a big challenge to starting to become a success when computers can finally understand the images that we’re sending them.

Henrik:  Brad, as of March 2016, how much of image recognition is done by humans versus machines?

Brad:  That’s a good question. Even in-house, quite a bit of it actually is done by machine now. When we first started out, we had a lot of, I would say, human-assisted image recognition. More and more of it now is done by computers. Essentially 100 percent of our image recognition is done by computers now, but we do have some human assistance as well. It really depends on the case.

Internally, what we’re going for is what we call a six-star answer. A five-star answer is something where you take a picture of, say, a cat or a dog, and we know generally what kind of breed it is. A six-star answer is where you take a picture of the same cat and we know exactly what breed it is. If you take a picture of a spider, we know exactly what species that spider is, every time. That’s what we’re going for.
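The five-star versus six-star distinction is essentially one of label granularity: category-level answers versus exact breed or species. A minimal sketch of that idea, with an invented taxonomy (the labels and hierarchy here are illustrative assumptions, not CamFind’s data):

```python
# Coarse ("five-star") vs. fine-grained ("six-star") answers for the
# same image, modeled as a fine-label -> category lookup. The taxonomy
# is invented for illustration.
TAXONOMY = {
    "maine_coon": "cat",
    "siamese": "cat",
    "labrador": "dog",
}


def five_star_answer(fine_label):
    """Coarse answer: just the category, e.g. 'cat'."""
    return TAXONOMY[fine_label]


def six_star_answer(fine_label):
    """Fine-grained answer: category plus the exact breed."""
    return f"{TAXONOMY[fine_label]}: {fine_label}"
```

The hard part, of course, is not the lookup but training a model that can reliably predict the fine-grained label in the first place, which is why the six-star bar is so much higher.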

Unsupervised computer learning is definitely exciting, but I think we’re about 20 to 30 years away from seeing unsupervised computer vision, unsupervised deep learning neural networks, actually achieve the promise that we expect from them. Until then, supervised deep learning neural networks are going to be around for a long time.

What we’re really excited about is that we’ve found a way to make that work at cloud scale in a way that customers are actually happy with. The users of CamFind are happy with the kind of results that they’re getting out of it.

Henrik:  As of March 2016, how do you see image recognition changing?

Brad:  We talk a little bit about image understanding. I think where this is really going is to video next. Now that we’ve got some technology out there that understands images, really the next phase of this is moving into video. How can we truly automate and machine the understanding of video? I think that’s really the next big wave of what we’re going to see evolve in terms of image recognition.

Henrik:  What advice would you like to share with people looking into image recognition?

Brad:  I think what we need to focus on specifically is this state-of-the-art technology, not quite new, of deep learning neural networks. As computer scientists, we’ve played around a lot, for decades, with a lot of different machine learning types.

What really is fascinating about deep learning is that it mimics the human brain, how we as humans learn about the world around us. I think that we need to inspire different ways of playing around with and modeling these neural networks, training them on larger and larger amounts of real-world data. Where we’ve really experimented is in training these neural networks on real-world data.

What we’ve found is that this is what truly brought about the paradigm shift that we were looking to achieve with deep learning neural networks. It’s really all about how we train them. For a long time, as we’ve been experimenting with image recognition, computer vision and these sorts of things, in an apples-to-apples analogy, we’ve been trying to train computers the way we would learn if we were to shut off all of our senses.

We have all these different senses. We have sight. We have sound. We have smell. We have our emotions. We learn about the world around us through all of these senses combined. That’s what forms these very strong relationships in our memory that really teach us about things.

When you hold a ball in your hand, you see it in three dimensions because you’ve got stereoscopic vision, but you also feel the texture of it. You feel the weight of it. You feel the size. Maybe you smell the rubber or you have an emotional connection to playing with a ball as a child. All of these senses combined create your experience of what you know as a ball plus language and everything else.

Computers, on the other hand, we feed lots of two-dimensional images. It’s like if you were to close one of your eyes and look at the ball, but without any other senses at all: no sense of touch, no sense of smell, no sense of sound, no emotional connection. It’s almost like flashing your eye open for 30 milliseconds at tons of different pictures of the ball and expecting to learn about it.

Of course, this isn’t how we learn about the world around us. We learn through all these different senses and experiences and everything else. This is what we would like to inspire other computer scientists, and those working with image recognition, to take into account, because this is where we’ve seen, as a company, the biggest paradigm shift in image understanding and image cognition. We really want to push the envelope as far as the state of the art as a whole. This is where we see it going.

Henrik:  Where can we find more information about image recognition?

Brad:  It’s actually a great question. This is such a buzzword these days, especially in the past couple of years. Really, it sounds almost cheesy but just typing in a search into Google about image recognition brings up so much now.

If you’re a programmer, there are a lot of different frameworks you can use to get started with image recognition. One of them is called OpenCV. It’s a bit of a toolbox for image recognition. It requires a little understanding of programming and a little understanding of the math and science behind it, but it gives you a lot of tools for basic image recognition.

Then, to play around with some of these other things I was talking about, deep learning neural networks, there are a couple of different frameworks out there. There’s actually this really cool JavaScript website where you can play around with a neural network in real time and see how it learns. It’s a fantastic resource that I like to send people to, to give them an introduction to how neural networks work.

It’s pretty cool. You play with its parameters and it paints a picture of a cat. It’s all in JavaScript, too, so it’s pretty simple.

There are two frameworks that we particularly like to play around with. One of them is called Caffe, and the other one is called Torch. Both are publicly available, open-source projects and frameworks for deep learning neural networks. They’re a great place to play around, learn and see how these things work.
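For a feel of what the supervised training loop inside frameworks like Caffe and Torch does at vastly larger scale, here is a framework-free sketch: a single logistic neuron trained by gradient descent on labeled examples. An OR gate stands in for a real labeled image dataset; everything here is a toy illustration, not how either framework is actually used.

```python
import math

# Labeled training data: ((input1, input2), target) pairs for OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1 = w2 = b = 0.0  # weights and bias, initialized to zero
lr = 0.5           # learning rate

for _ in range(2000):
    for (x1, x2), target in data:
        # Forward pass: weighted sum through a sigmoid activation.
        y = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        # Backward pass: gradient of cross-entropy loss w.r.t. each weight
        # simplifies to (prediction - target) times the input.
        err = y - target
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err


def predict(x1, x2):
    """Threshold the trained neuron's output at 0.5."""
    y = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
    return 1 if y >= 0.5 else 0
```

Deep learning frameworks repeat exactly this forward/backward/update cycle, just with millions of weights, many layers and GPU acceleration, which is why the quality and scale of the labeled training data matters so much.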

When people ask about image recognition and deep learning neural networks, those are the sorts of things I like to point them to, because they’re a great introduction and playground to get your feet wet with this type of technology.

Henrik:  Thanks, Brad.

Brad:  Absolutely. Thanks again.

Henrik:  For more on this, visit Tagging.tech. Thanks again.


 
