Google launched the open beta for its Cloud Vision service on Thursday, giving developers a new way to make intelligent apps that use images.
Using Google Cloud Vision, developers can analyze images in several ways, such as running optical character recognition to pull text out of images, or using the technology that powers Google’s SafeSearch feature to detect inappropriate images. Google launched the service in private beta last year, and it is now available for public consumption.
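To give a sense of how those features are invoked, here is a minimal sketch of the JSON body a developer might send to the Cloud Vision REST API's `images:annotate` endpoint, requesting OCR and SafeSearch analysis together. The endpoint name and feature types reflect the v1 API; the image bytes are a placeholder, and an actual call would require an API key or OAuth credentials.

```python
import base64
import json

def build_annotate_request(image_bytes: bytes) -> dict:
    """Build a request body for POSTing to
    https://vision.googleapis.com/v1/images:annotate?key=API_KEY"""
    return {
        "requests": [
            {
                # Images are sent inline as base64-encoded content.
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                # Ask for OCR and SafeSearch analysis in a single request.
                "features": [
                    {"type": "TEXT_DETECTION"},
                    {"type": "SAFE_SEARCH_DETECTION"},
                ],
            }
        ]
    }

body = build_annotate_request(b"<raw image bytes go here>")
print(json.dumps(body, indent=2))
```

Each request can bundle several feature types, so a single round trip can return both extracted text and a SafeSearch verdict for the same image.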
In addition to making the service publicly available, Google also revealed the pricing. Developers will be able to run up to 1,000 images through the service for free, and then pay a flat fee for each additional group of 1,000 images they upload. Developers will get discounts for sending large volumes of pictures through the service.
However, users will be able to send a maximum of 20 million images a month through Google Cloud Vision during the open beta period, so companies with large-scale production workloads will likely want to reserve the service for lower-volume applications for now.
The tools are key for people hoping to build intelligent applications that handle images without spending the money to develop image recognition capabilities in-house. Instead of hiring machine learning experts, companies can pay Google to make their applications smarter.
For example, the inappropriate content detection feature is being used by PhotoFy, a startup that lets users edit images with branded content from marketing partners. Google Cloud Vision allows the small startup to make sure users aren’t putting its partners’ logos on top of violent or sexual content.
This beta launch is part of an overall market shift toward providing developers with tools to make their applications more intelligent, without requiring them to do in-house heavy lifting. Microsoft offers similar features through its Project Oxford tools, some of which are available commercially through Azure’s Cortana Analytics Suite.