
Google Pixel 4 To Get AI Zoom Capability


Google Pixel 4 is the latest addition to the Pixel family from the tech giant Google. The Pixel 4 smartphone demonstrates how artificial intelligence (AI) and software can enhance a camera's capabilities. The camera, as we know, has long been one of the most important selling points of any mobile device.

Google Pixel 4, the latest model in a smartphone line defined by its cameras, touts enhanced zoom when shooting photos as its biggest upgrade. But the Alphabet company is doing it differently from Samsung Electronics, Huawei Technologies, or Apple, for that matter. Instead of adding three or four cameras with a complicated optics system, Google went for a single extra lens that relies on AI and processing to fill in the quality gap. Intelligent, isn't it?

From the Manufacturer’s Own Words

In place of the usual spec barrage, Google prefers to talk about a "software-defined camera," Isaac Reynolds, product manager on the company's Pixel team, said in an interview. The device should be judged by the end product, he argued, which Google claims is a 3x digital zoom that matches the quality of optical zoom from multi-lens arrays. The Pixel 4 has two lenses with a magnification factor between them of less than 2x; the tech that extends that useful range is almost entirely software.

New Google Pixel 4 Smartphone

The success of the Pixel's camera is instrumental to Google's broader ambitions: it drives Google Photos adoption, provides more fodder for Google's image libraries, and helps create better experiences with augmented-reality applications, the new on-screen walking directions in Google Maps being one.

More on Google Pixel 4 Camera Details

Super Res Zoom, a feature Google launched last year, uses the slight hand movements of a photographer when capturing a shot, normally a hurdle to creating crisp images, as an advantage in crafting an image that's sharper than it otherwise would be. The camera shoots a burst of quick takes, each one from a slightly different position because of the camera shake, then combines them into a single image. It's an algorithmic trick that lets Google collect more information from the same imaging hardware.
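As a rough illustration only, not Google's actual pipeline, the sketch below shows the core idea in Python with NumPy: place each burst frame's pixels onto a finer grid according to its estimated hand-shake offset, then average the samples that land in each cell. The merge_burst function, the 2x grid, and the toy shift values are all assumptions made for the example.

```python
import numpy as np

def merge_burst(frames, shifts, scale=2):
    """Accumulate slightly shifted burst frames onto a finer output grid.

    frames: list of 2D grayscale arrays, all the same shape
    shifts: list of (dy, dx) sub-pixel offsets per frame (e.g. from hand shake)
    scale:  upscaling factor of the output grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))       # summed pixel values per fine cell
    count = np.zeros_like(acc)                   # how many samples landed in each cell

    ys, xs = np.mgrid[0:h, 0:w]                  # coordinates of every source pixel
    for frame, (dy, dx) in zip(frames, shifts):
        # Each sample lands where the shaken camera actually saw it, on the fine grid.
        fy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        fx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (fy, fx), frame)
        np.add.at(count, (fy, fx), 1.0)

    count[count == 0] = 1.0                      # leave unfilled cells at zero
    return acc / count

# Toy usage: the same scene "captured" four times with different hand-shake offsets.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shifts = [(0.0, 0.0), (0.3, 0.1), (0.6, 0.4), (0.2, 0.7)]
frames = [scene for _ in shifts]                 # stand-ins for real burst captures
merged = merge_burst(frames, shifts)             # 128x128 result
```

The point of the sketch is simply that several slightly offset samples of the same scene carry more information than any single frame; the real system also has to align, weight and denoise the frames far more carefully.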

Adding AI to the Camera Is the New Trend

To augment its reliance on AI and machine-learning tasks, Google has designed and added its own Pixel Neural Core chip to the Pixel 4 lineup. It accelerates the machine-learning speed of the device. The intention is to differentiate Google's offering from other Android smartphones on the market with a Qualcomm Snapdragon processor at their core.

The other major tool in Google's AI kit is called RAISR, or Rapid and Accurate Image Super Resolution. It trains AI on vast libraries of images so it can more effectively enhance image resolution. The system can recognise particular patterns, edges and visual features, so that when it detects them in lower-quality shots, it knows how to improve them. That's key to creating zoom with "a lot smoother quality degradation," as Reynolds put it. With more than a billion Google Photos users, the US company has a massive supply of images to train its software on.
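The sketch below is a toy illustration of that pattern-indexed idea, not RAISR itself: it buckets each patch of a cheaply upscaled image by its dominant gradient direction and applies a filter chosen for that bucket. The gradient_bucket and enhance functions and the stand-in sharpening filter are hypothetical placeholders for the filters RAISR would learn from a large photo library.

```python
import numpy as np

def gradient_bucket(patch, n_buckets=8):
    """Hash a patch into a bucket by its dominant gradient orientation."""
    gy, gx = np.gradient(patch.astype(np.float64))
    angle = np.arctan2(gy.mean(), gx.mean())            # crude dominant edge direction
    return int((angle + np.pi) / (2 * np.pi) * n_buckets) % n_buckets

def enhance(upscaled, filters, size=5):
    """Sharpen a cheaply upscaled image with per-bucket filters.

    upscaled: 2D array produced by a simple (e.g. bilinear) upscale
    filters:  dict mapping bucket index -> (size x size) filter; here these are
              stand-ins for filters learned from a large photo library
    """
    pad = size // 2
    padded = np.pad(upscaled, pad, mode="edge")
    out = upscaled.astype(np.float64).copy()
    for y in range(upscaled.shape[0]):
        for x in range(upscaled.shape[1]):
            patch = padded[y:y + size, x:x + size]
            f = filters.get(gradient_bucket(patch))
            if f is not None:
                out[y, x] = float((patch * f).sum())     # apply that bucket's filter
    return out

# Stand-in "learned" filter: a mild sharpen, reused for every bucket.
sharpen = -np.ones((5, 5)) / 25.0
sharpen[2, 2] += 2.0
filters = {b: sharpen for b in range(8)}

blurry = np.random.default_rng(1).random((32, 32))       # pretend bilinear upscale
result = enhance(blurry, filters)
```

In the trained system, each bucket would hold a different filter fitted on pairs of low- and high-quality photos, which is where the vast Google Photos library comes in.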

Other Pixel 4 Features

Among the other features Google offers with the Pixel 4 is the ability to identify people's faces, in a way similar to Apple's iPhones. Familiar faces get priority in new captures: the camera focuses on them and gives them extra care. That use of software technology has defined Google's devices to date. Facebook, Amazon.com and Apple aim to employ their own AI systems too.

