tagging.tech

Audio, Image and video keywording. By people and machines.



Tagging.tech interview with Joe Dew

Tagging.tech presents an audio interview with Joe Dew about image recognition

Listen and subscribe to Tagging.tech on Apple Podcasts, AudioBoom, CastBox, Google Play, RadioPublic or TuneIn.


Keywording Now: Practical Advice on using Image Recognition and Keywording Services

Now available

keywordingnow.com

 

Transcript:

 

Henrik de Gyor:  This is Tagging.tech. I’m Henrik de Gyor. Today, I’m speaking with Joe Dew. Joe, how are you?

Joe Dew:  I’m well. How are you?

Henrik:  Good. Joe, who are you and what do you do?

Joe:  I am the Head of Product for a company called JustVisual. JustVisual is a deep learning company focused on computer vision and image recognition. We’ve been doing this for almost eight years. My role in the company is…think of me as the interface between the engineering and computer vision scientists and our end customers.

We have a very deep technology bench and technology stack that does very sophisticated things, but translating a lot of that technology and those capabilities for end consumers can be a challenge. Likewise, we have customers who are interested in the space but aren’t really clear on how to use it. My role is to translate their needs into requirements for engineering.

Henrik:  Joe, what are the biggest challenges and successes you’ve seen with image and video recognition?

Joe:  I think the biggest challenge, for a little perspective, is that the human brain has evolved over millions of years to handle and process visual information very easily. A lot of the things that we as humans can recognize and do -- things even a two- or three-year-old child can do -- are actually quite difficult for computers and take a lot of work.

The implication of this is that users’ expectations of precision and accuracy when it comes to visual recognition are very, very high. I like to say there’s no such thing as a visual homonym.

Meaning that, if you did a text search, for example, and you typed in the word jaguar and it came back with a car and it came back with a cat, you could understand why the search result came back that way. If I asked the question with a visual -- if I queried a search engine with an image -- and it came back with a car when I meant a cat, it would be a complete fail.

When we’ve done testing with users, on visual similarity for example, the expectations of similarity are very, very high. They expect almost an exact match when they’re asking. It’s largely because we, as humans, expect that. Again, if you think about how we interact with the world digitally, it’s actually a very unnatural thing.

When you search for things, you have to translate that, oftentimes, into a word or a phrase. You type it into a box and it returns words and phrases at which point you then need to translate again into the real world.

In the real world, you just look at something and say, “Hey, I want something like that.” It is a picture in your mind, and you expect to receive something like that. What we’re trying to do is solve that problem, which is a very tricky thing for computers to do at this point. But, having said that, there have been tremendous improvements in this capability in the field.

Companies from Google to Facebook to Microsoft, for example, are doing some very interesting work in that field.

Henrik:  Joe, as of March 2016, how do you see image and video recognition changing?

Joe:  I think the three big factors impacting this field are, first, the increasing processing power of hardware -- just the chip technology, Moore’s law, that type of thing.

Second is a vast improvement in the sophistication of algorithms -- specifically, deep learning algorithms that are getting smarter and smarter with training.

The third is the increase in data. There is just so much visual data now -- which has not been true in years past -- that can be used for training and for increasing precision and recall. Those are the things happening on the technology side.

The translation of all of this is that the accuracy of image recognition -- and, for that matter, video recognition -- will see exponential improvements in the next few months, let alone years. You’re starting to see that already in client-side applications, robotics, websites, and the ability to extract pieces of an image and see visually similar results.

Henrik:  Joe, what advice would you like to share with people looking at image and video recognition?

Joe:  I think understanding the use case is probably the most important thing to think about. Oftentimes, you hear about the technology and what it can do, but you need to think thoroughly about what, exactly, you want the technology to do.

As an example, a lot of the existing technology today does what we call image recognition: the idea of taking an image or a video clip and essentially tagging it with English-language words. Think of it as translating an image into text. That’s very useful in a lot of cases, but oftentimes, from a use case -- from a user’s perspective -- it’s not that useful.

If you take a picture of a chair, for example, and it returns “chair,” the user says, “I know it’s a chair. Why do I need this technology to tell me it’s a chair? What I’m really looking for is a chair that looks like this. Where can I find it?” That is a harder question to answer, and it is not an exercise where you’re simply translating an image into words.
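To make the distinction concrete, here is a minimal sketch of the “a chair that looks like this” idea: instead of returning a tag, each image is turned into an embedding with a pretrained network and the catalogue is ranked by similarity to the query photo. The model choice, file names, and catalogue are illustrative assumptions, not a description of JustVisual’s system.

```python
# Minimal sketch: visual similarity search via embeddings, as opposed to
# tagging ("this is a chair"). Assumes torchvision >= 0.13 and Pillow;
# the catalogue paths and query photo are made up for the example.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone used purely as a feature extractor (classification head removed).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Return an L2-normalized embedding for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = model(img).squeeze(0)
    return vec / vec.norm()

# Hypothetical catalogue of product photos.
catalogue = ["chair_01.jpg", "chair_02.jpg", "sofa_01.jpg"]
index = torch.stack([embed(p) for p in catalogue])

# Query with the photo the user just took; rank catalogue items by cosine similarity.
query = embed("my_chair_photo.jpg")
scores = index @ query
for i in scores.argsort(descending=True):
    print(catalogue[i], float(scores[i]))
```

A tagging service would stop at the label “chair”; the ranking step above is what turns the same picture into a “where can I find one like this” answer.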

We found that there are companies that use Mechanical Turk techniques, etc., to essentially tag images, but users have not really adopted that because, again, it’s not that useful. So that’s one thing: think about the use case -- what exactly do you want the technology to do?

A lot of the machine learning and deep learning systems involve a lot of training. The other part you need to think about is, what do you want the algorithm to train for? Is it simply tagging or is it to extract certain visual attributes? Is it pattern? Is it color? What is it that you actually want the algorithm to see, essentially?

The third area is that, right now, user adoption of the technology is still pretty low. I think that as it becomes broader and more commonplace, and you start seeing it in more and more applications, adoption will increase, but the concept of using an image as a query is still very foreign to most people.

When you say visual search, it doesn’t really mean anything to them. There’s a whole user adoption curve that has to happen before they can catch up to the technology.

Henrik:  Where can we find out more information about image and video recognition?

Joe:  You can go to our site, justvisual.com, to get some background on what we do. There are a lot of interesting companies and a lot of research happening right now in the field. It’s a little bit all over the place, so there isn’t necessarily one place that has all the information, because the field is changing so quickly. It’s an exciting time for this field.


 

For a book about this, visit keywordingnow.com



Tagging.tech interview with Nicolas Loeillot

Tagging.tech presents an audio interview with Nicolas Loeillot about image recognition

 

Listen and subscribe to Tagging.tech on Apple Podcasts, AudioBoom, CastBox, Google Play, RadioPublic or TuneIn.


Keywording Now: Practical Advice on using Image Recognition and Keywording Services

Now available

keywordingnow.com

 

Transcript:

 

Henrik de Gyor:  This is Tagging.tech. I’m Henrik de Gyor. Today, I’m speaking with Nicolas Loeillot. Nicolas, how are you?

Nicolas Loeillot:  Hi, Henrik. Very well, and you?

Henrik:  Great. Nicolas, who are you, and what do you do?

Nicolas:  I’m the founder of a company called LM3Labs. The company is entering its 14th year of existence. It was created in 2003, and we are based in Tokyo, in Singapore, and in Sophia Antipolis in the south of France.

We develop computer vision algorithms, software, and sometimes hardware. Instead of focusing on some of the traditional markets for this kind of technology, like military or security and those kinds of things, we decided to focus on more fun markets, like education, museums, entertainment, and marketing.

What we do is develop unique technologies based on computer vision systems. We were born from the CNRS, which is the largest laboratory in France. We had our first patents for triangulation of fingers in 3D space, so we could very accurately find fingers a few meters away from the camera and use those fingers for interacting with large screens.

We thought that it would be a good match with large projections or large screens, so we decided to go to Japan and meet video projector makers like Epson, Mitsubishi, and others. We presented the patent -- just the paper [laughs] -- explaining the opportunity for them, but nobody understood what the future of gesture interaction would be.

Everybody was saying, “OK, what is it for? There is no market for this kind of technology, and the customers are not asking for this.” That’s a very Japanese way to approach the market.

The very last week of our stay in Japan, we met with NTT DoCoMo, and they said, “Oh, yeah. That’s very interesting. It looks like Minority Report, and we could use this technology in our new showroom. If you can make a product from your beautiful patent, then we can be your first customer, and you can stay in Japan and everything.”

We went back to France and made the electronics to support the technology. Of course, some pilots were already written, so we went back to NTT DoCoMo and installed them in February 2004.

From that, NTT DoCoMo introduced us to many big companies, like NEC, DMP, and some others in Japan, and they all came with different types of requests: “OK. You track the fingers, but can you track body motion? Can you track gestures? Can you track the eyes, the face, the motions, and everything?”

We made a strong evolution of the portfolio, with something like 12 products today, which are all computer vision-related and usually pretty unique in their domain, even if we have seen some big competitors like Microsoft [laughs] in our market.

In 2011, we saw the first deployment of 4G networks in Japan, and we said, “OK. What do we do with 4G? That’s very interesting -- very large broadband, excellent response times and everything. What can we do?”

It was very interesting. We could do what we couldn’t do before, which is to put the algorithm on the cloud and use it on the smartphone, because smartphones were becoming very smart. It was just the beginning of smartphones at the time, with the iPhone 4S, which was the first one really capable of something.

We started to develop Xloudia, which is today one of our lead products. Xloudia is mass recognition of images, products, colors, faces, and everything from the cloud, in 200 milliseconds. It goes super fast, and we search very large databases. We can have millions of items in the database and find the specific object or item in 200 milliseconds.

The typical application of the technology was augmented reality, which was done far before us, but we said, “OK. Image recognition can be applied to something which is maybe less fun than augmented reality, but much more useful, which is the recognition of everything.”

You just point your smartphone at any type of object, or people, or colors, or clothes, or anything, and we recognize it. This can be done with the algorithm, with image recognition and video recognition. That’s a key point, but not only with these kinds of algorithms.

We needed to develop some deep learning recognition algorithms for finding proximities and similarities, and to offer users more capabilities than saying, “Yes, this is it,” or, “No, this is not it.” [laughs]

We focused on this angle, which is: OK, computer vision is under control, we know our job, but we need to push the R&D into the distribution of the search on the network in order to go very fast. That’s the key point. The key point was going super fast, because for the user experience, it has to feel instantaneous.

The other question is, “If we don’t find exactly what the user searched for, how can we find something which is similar or close to what they are looking for?” There is an understanding of the search which goes far beyond the database that we have in the catalog, making links between the search and the environment of the user.

The other thing that we focused on was the user experience. For us, it was absolutely critical that people don’t press any button to find something. They just have to use their smartphone and point it at the object, or the page, or the clothes, or anything they want to search, and the search is instantaneous -- there is no other action.

There is no picture to take. There is no capture. There is no sending anything. It’s just capturing in real time from the video flow of the smartphone, directly understanding what is passing in front of the smartphone. That was our focus.

On this end, it implies a lot of processing, I would say, for the synchronization between the smartphone and the cloud, because you can’t send all the information to the cloud all the time, so there is a protocol to follow in terms of communication. That was our job.

Of course, we don’t send pictures to the cloud, because that’s too heavy and too data-consuming. What we do is perform a big chunk of the extraction, of the work, on the smartphone, and send only the data necessary for the search to the cloud.

That data can be feature points for the image. It can be a color reference extracted from the image. It could be vectors, or it could be a series of images from a video, for instance, to make something which is coherent from frame to frame.
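As an illustration of that idea only -- not LM3Labs’ actual protocol -- here is a hedged sketch of extracting compact features (ORB keypoint descriptors plus a coarse color histogram) on the device and sending just those to a recognition endpoint. The endpoint URL, payload format, and feature choices are assumptions made up for the example.

```python
# Sketch: send compact features instead of full frames to the cloud.
# Uses OpenCV; the endpoint and JSON payload are hypothetical.
import cv2
import json
import urllib.request

def extract_features(frame):
    """Compute ORB keypoint descriptors and a coarse color histogram."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=200)          # ~200 x 32 bytes of descriptors
    _, descriptors = orb.detectAndCompute(gray, None)
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()   # 512 floats, not megabytes of pixels
    return descriptors, hist

def send_query(descriptors, hist):
    """Post only the compact features to a (hypothetical) recognition endpoint."""
    payload = json.dumps({
        "descriptors": descriptors.tolist() if descriptors is not None else [],
        "color_hist": hist.tolist(),
    }).encode("utf-8")
    req = urllib.request.Request("https://example.com/recognize", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Grab one frame from the live camera stream and query with its features only.
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok:
    print(send_query(*extract_features(frame)))
capture.release()
```

The point is the payload size: a few kilobytes of descriptors per frame instead of a full image, which is what makes a 200-millisecond round trip over a mobile network plausible.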

That’s Xloudia, super fast image recognition with the smartphone, but cloud‑based, I would say, and the purpose is really to focus on the user experience, to go super fast, and to always find something back [laughs] as a reference.

The target market may be narrower than what we had before with augmented reality, and what we target is to help e-commerce, or more specifically mobile commerce, players implement visual search directly into their applications.

The problem we have, even in 2016, is that when you want to buy something on your smartphone, it’s very unpleasant. Even if you go to the bigger e-commerce companies like Amazon and the others, what you have on your smartphone is just a replication of what you can see on the Web, but it’s not optimized for your device. Nobody’s using the camera -- or very few are using the camera -- for search.

The smartphone is not a limited version of the Web. It comes with much more power. There are cameras. There are sensors, and many things you’d find on a smartphone which are not on a traditional PC.

The way we do mobile commerce must be completely different from the traditional e‑commerce. It’s not a downgraded version of the e‑commerce. It must be something different.

Today, we see that 50 percent of the Internet traffic to big brand websites is coming from the smartphone -- 50 percent -- and yet only 30 percent of e-commerce is done from mobile.

It means that there is a huge gap between that 50 percent and that 30 percent. There are 20 percent of visitors who don’t buy on the smartphone because of this lack of confidence, or economics, or something.

There is something wrong on the road to [laughs] the final basket. They don’t buy with the smartphone, and smartphone traffic is definitely increasing over time as well. It’s 50 percent today for some big brands, but it’s increasing globally for everybody.

There are some countries, very critical countries like Indonesia or India, which have huge populations -- more than 300 million in Indonesia, one billion people in India. These guys go straight from nothing to the latest Samsung S6 or S7.

They don’t go through the PC stage, so they buy things directly from the smartphone, and there’s a huge generation of people who will just buy everything on their smartphone without knowing the PC experience, because there are no ADSL lines, or because there are so many problems with the PC -- it’s too expensive, there’s no space, or whatever.

We definitely target these kinds of markets, and we want to serve the e-commerce or mobile commerce pioneers -- people who really consider that there is something to be done in the mobile industry to improve the user experience.

Henrik:  What are the biggest challenges and successes you’ve seen with image and video recognition?

Nicolas:  If you want to find something which is precise, everything is fine today. In 2016 there are many technologies, algorithms, where you can compare: “OK. Yes, this is a Pepsi bottle, and this is not a Coca-Cola bottle.” That’s pretty much under control today. There is no big issue with this.

The challenge -- I would prefer to say the war -- is really understanding the context, so bringing more context than just recognizing a product: What is the history? What is the story of the user, the location of the user? If we can’t find, or if we don’t want to find, a Pepsi bottle, can we suggest something else, and if yes, what do we suggest?

It’s more than just tagging things which are similar. It’s just bringing together a lot of sources of information and providing the best answer. It’s far beyond pure computer vision, I would say.

The challenge for the computer vision industry today, I would say, is to merge with other technologies -- machine learning, deep learning, sensor aggregation -- and to be able to bring all these technologies together to offer something which is smarter than previous technologies.

On the pure computer vision side, of course, the challenge is to create databases or knowledge where we can actually identify that some objects are close to what we know but are not completely what we know, and, little by little, to learn or to build some knowledge based on what is seen or recognized by the computer vision.

One of the still-existing challenges…I have been in this industry for a few decades, but [laughs] there is still a challenge that remains, which I would call background abstraction, or noise abstraction: “How can you extract what is very important in the image from what is less important?”

That’s still something which is a challenge for everyone, I guess: “What is the focus? What do you really want? Within a picture, what is important, and what is not important?” That is a key thing, and algorithms are evolving in this domain, but it’s still challenging for many actors, many players in this domain.

Henrik:  As of March of 2016, how do you see image and video recognition changing?

Nicolas:  The directions are speed. Speed is very important for the user experience. It must be fast. It must be seamless for the users.

This is the only way for service adoption. If the service is not smooth, is not swift -- there are many adjectives for this in English [laughs] -- but if the experience is not pleasant, it will not be adopted, and then it can die by itself.

The smoothness of the service is absolutely necessary, and the smoothness for the computer vision is coming from the speed of the answer, or the speed of the recognition. It’s even more important to be fast and swift than to be accurate, I think. That’s the key thing.

The other challenge, the other direction for our company, is definitely deep learning. Deep learning is something which takes time, because we must run algorithms on samples, on big databases, to build experience and to build something which grows by itself.

We can’t say that the deep learning for LM3Labs, or for another company, is ready and finished. It’s absolutely not. It’s something which is permanently ongoing.

Every minute, every hour, every day, it’s getting there, because the training keeps running, and we learn to recognize more. We improve the recognition, and we use deep learning for two purposes at LM3Labs.

One of them is the speed of recognition, so it’s the distribution of the search on the cloud. We use deep learning technologies to smartly distribute the search and go fast.

The other one is more computer vision-focused: if we don’t find exactly what the user is trying to recognize, we find something which is close, and we can make recommendations.

These recommendations are used for the final users, so they have something at the end and it’s not just a blank answer -- there is something to propose. Or it can be used between the customers.

We can assess trends in the searches, and we can provide our customers -- our B2B customers -- with recommendations saying, “OK. This month, we understand that, across all our customers, the brand Pepsi-Cola is going up, for instance, instead of Coca-Cola.” This is just an example. [laughs] That’s typically the type of application we use deep learning for.

Henrik:  What advice would you like to share with people looking at image and video recognition?

Nicolas:  Trust the vision. The vision is very important. There are a lot of players in the computer vision community today.

Some have been acquired recently -- Metaio by Apple, or Vuforia by PTC, are two recent examples. Some people are focused on augmented reality, really making the visual aspect of things. Some others are more into the cloud for visual search, improving search for law enforcement and these kinds of things.

The scope, the spectrum, of the market is pretty wide, and there is probably someone who has exactly the same vision as you [laughs] on the market.

On our side, LM3Labs, we are less interested in augmented reality clients, I would say. We are less interested in machine‑to‑machine search because this is not exactly our focus, either.

We are very excited by the future of mobile commerce, and this is where we focus -- our vision is really on this specific market segment. I would say the recommendation is: find a partner whose vision goes with yours. If your vision is that augmented reality will invade the world, go for a pure player in that domain.

If you have a smart vision for the future of mobile commerce, join us. [laughs] We are here.

Henrik:  Thanks, Nicolas. For more on this, visit Tagging.tech.

Thanks again.


 

For a book about this, visit keywordingnow.com



Tagging.tech interview with Georgi Kadrev

Tagging.tech presents an audio interview with Georgi Kadrev about image recognition

 

Listen and subscribe to Tagging.tech on Apple Podcasts, AudioBoom, CastBox, Google Play, RadioPublic or TuneIn.


Keywording Now: Practical Advice on using Image Recognition and Keywording Services

Now available

keywordingnow.com

 

Transcript:

 

Henrik de Gyor:  This is Tagging.tech. I’m Henrik de Gyor. Today, I’m speaking with Georgi Kadrev. Georgi, how are you?

Georgi Kadrev:  Hi, Henrik. All good. I am quite enthusiastic to participate in the podcast.

Henrik:  Georgi, who are you and what do you do?

Georgi:  I’m Co‑Founder and CEO of Imagga, which is one of the pretty good platforms for image recognition as a service. We have auto‑tagging and auto‑categorization services that you can use for practical use cases.

Henrik:  Georgi, what are the biggest challenges and successes you’ve seen with image recognition?

Georgi:  In terms of challenges, I think one of the biggest ones is that we, as human beings, are used to perceiving a lot of our world through our eyes. Basically, when people think about image recognition in general, they have a very diverse and very complete picture of what it should do.

Let’s say from optical character recognition, or recognizing text, to facial recognition of a particular person, to conceptual tagging, to categorization -- all these different aspects of visual perception.

People typically expect that it’s the same technology or the same solution, but actually quite a lot of different approaches need to be engaged in the actual process of recognizing and understanding the semantics of an image.

In terms of successes in addressing this, I can say that, not surprisingly, the deep learning thing that has been quite a big hype in the last few years has been a huge success for the more conceptual, or class-level, object recognition -- what something is as a type of object.

Is it a bottle? Is it a dog? Is it a cat? Is it a computer? Is it a mobile phone? And so on. This has become pretty practical, and right now we can say that we are close to human level in recognition of a lot of different classes of objects.

At the same time, in some other spaces, like logo recognition and facial recognition, we also see quite high precision rates that allow for practical applications.

I can say one of the good things is that we are closer and closer to automating at least part of the tasks that need to be performed by a computer, replacing the need for manual annotation of photos for different use cases.

In terms of challenges, maybe I would also add that you still need a lot of data -- properly annotated data. Machine learning, and deep learning in general, is very data-greedy, so we need an enormous amount of samples to really make something robust enough and practical enough.

We still see that gathering a high-quality dataset is one of the challenges. This is something that we also try to address internally. It helps us be more competitive in terms of quality and technology.

Henrik:  As of March 2016, how do you see image recognition changing?

Georgi:  What we definitely see is that there are more and more services -- some of them of pretty good quality -- that try to automate the different aspects of image recognition that I briefly touched on.

We even see big players like Google starting to offer services for some of these things, like what they call label recognition and what we call tagging, or what they and most of the vendors call optical character recognition.

We have also seen logo and facial recognition become quite popular and be utilized more and more in different kinds of brand monitoring services.

At the same time, on the downside of visual recognition, something that we see when we talk about highly artistic images, or some more specific art, or other types of specific content, is that the technology still needs to be custom-trained for that.

That is, if it is possible at all to train classification-based image recognition to recognize different kinds of artistic images or different kinds of very specialized image content.

It’s related to what I mentioned at the beginning: if you have a specific task, sometimes you need a specific approach. Deep learning has addressed this to a certain extent, but it’s still not a one-size-fits-all solution. We see that in a lot of cases the customer needs to define a specific problem so that they can have a very good and precise specific solution.

Henrik:  As of March 2016, how much of image recognition is completed by humans versus machines?

Georgi:  I would say, [laughs] it honestly depends on the task. We’ve seen some cases where machines can be better than humans -- and not just in theory, but in practice.

For example, if we train a custom classifier with a human-curated data set and then do some kind of testing or validation, we see that the things that are reported as errors in the learning process can actually be errors made by the people.

The person made a mistake when annotating the photo, so the prediction is falsely reported as an error although it’s correct. In a way, this is promising, because it shows that the automation and consistency that machines can provide is pretty good in terms of precision.

At the same time, there are tasks where you need a lot of explicit or implicit knowledge in order to resolve the automation task. A lot of background knowledge that people have is not available to the machine, and then you need to figure out how to either automate this, use a combination of a computer and a human, or decide that this is a fully human task.

Still, it’s not approachable by a purely technical approach. I cannot give an exact number. Something interesting that I can share is a statistic from a pretty interesting experiment we did called Clash of Tags, where we ask people. We have a data set of stock photography, and this stock photography has an existing set of tags provided by various people, like the stock photographers themselves.

Then we also have the same set of stock photos annotated using our current technology, completely blind to the original tags that people have put on the images. Then we do this thing where we ask people, “Type a keyword and then you get search results.”

Which set of results appears on the left-hand side or the right-hand side is not known in advance, but one set of results is based on the tags that people have put, and the other set is based on the tags that our API has generated and assigned to the images.

The user needs to pick which is the winning set. In roughly 45 percent of the cases, people have chosen the result set based on automatically generated tags as better than the set of results based on human-provided tags. It’s not more than 50, but it still means that in a lot of cases the machine has been superior to human performance.
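For readers who want the shape of that experiment, here is a toy sketch of a blind, side-randomized preference test in the spirit of Clash of Tags; the search and judging functions are stand-ins, and only the protocol structure is taken from the description above.

```python
# Toy sketch of a blind preference test between human-tagged and machine-tagged
# search results. Each trial randomizes which side shows which source, records
# the side the user picked, and the win rate for the machine is tallied.
import random

def run_trial(query, search_human, search_machine, ask_user):
    """Return True if the user preferred the machine-tagged result set."""
    left_is_machine = random.random() < 0.5              # hide which side is which
    if left_is_machine:
        left, right = search_machine(query), search_human(query)
    else:
        left, right = search_human(query), search_machine(query)
    picked_left = ask_user(query, left, right)            # user picks a side blindly
    return picked_left == left_is_machine

def machine_win_rate(queries, search_human, search_machine, ask_user):
    """Fraction of trials where the machine-tagged results won."""
    wins = sum(run_trial(q, search_human, search_machine, ask_user) for q in queries)
    return wins / len(queries)
```

A win rate around 0.45, as reported here, means the automatically generated tags were preferred in nearly half of the head-to-head comparisons.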

I believe this number will grow in the future. I can say there’s still a way to go to something like complete automation, but we are getting closer and closer, and we’re enthusiastic about it.

Henrik:  Georgi, what advice would you like to share with people looking into image recognition?

Georgi:  I would say, have a very clear idea of what kind of value you want to drive out of it and try to optimize for that, whether you’re working on it yourself or with a vendor. Make it really clear what your objectives are for image recognition. Just think from the practical aspect.

This is something that I, personally, and our whole team have always stressed. Let’s see what it does, what it can do, and what it can’t, and address whether there’s really a pain that can be solved right now or not. Also, from the vendor side, I would suggest: don’t over-promise, because it’s quite easy to get people a bit confused.

They have an expectation like, “It’s AI, so it can do anything?” But you need to be realistic, so you save your time and you save your potential customer’s time. If the use case is very clear and you can commit, as a professional, that this is going to work out, then go for it. Other than that, don’t waste time -- yours or your potential customer’s.

This is something that we saw a lot, because a lot of people ask about features that are currently not practical enough technically, or they ask about features that we don’t have. We learned, the hard way to a certain extent, to say, “This is possible; this is not possible currently from our perspective.”

Henrik:  Where can we find more information about image recognition?

Georgi:  It depends on what you need. Do you need more data for training, or do you need more basic knowledge, or do you need different kinds of inspiration about business applications? There are different sources.

Obviously, ImageNet and all the accompanying information and algorithms that we have around this pretty nice dataset are quite useful for researchers. For beginners in image recognition, there is also a whole set of Coursera courses.

One of the most notable ones is from Stanford University, and there are a few more pretty good ones from most of the top European and American universities. There are also different kinds of newsletters and digests. AI Weekly is pretty good inspiration-wise -- there is a mixture of research topics, business cases, cool hacks, and ideas about what you can do with image recognition.

Henrik:  Well, thanks, Georgi.

Georgi:  Thanks a lot, Henrik. I hope your audience will enjoy the podcast, including our participation in it.

Henrik:  For more on this, visit Tagging.tech.

Thanks again.


 

For a book about this, visit keywordingnow.com