Apple has published a new entry in its Machine Learning Journal discussing face detection and the related Vision framework, which developers can use in apps for macOS, iOS, and tvOS.
The entry, titled “An On-device Deep Neural Network for Face Detection,” explores the challenges of running deep-learning-based face detection on device. It also explains how running detection locally, rather than on cloud servers, preserves user privacy.
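For context, here is a minimal sketch of how an app might use the Vision framework's face-detection API; the `cgImage` input is a hypothetical, already-loaded image, and error handling is kept to the bare minimum:

```swift
import Vision

// Build a request that finds face bounding boxes in an image.
let request = VNDetectFaceRectanglesRequest { request, error in
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        // boundingBox is in normalized coordinates (0...1, origin at bottom-left).
        print("Face at \(face.boundingBox)")
    }
}

// `cgImage` stands in for an image your app has already loaded.
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try? handler.perform([request])
```

Under the hood, Vision dispatches the deep-learning model to the CPU or GPU as appropriate, so the app never has to manage the network itself.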
An excerpt from the paper by Apple’s Computer Vision Machine Learning Team reads, “The deep-learning models need to be shipped as part of the operating system, taking up valuable NAND storage space. They also need to be loaded into RAM and require significant computational time on the GPU and/or CPU.”
The team goes on to defend on-device computation over cloud-based services, even though the models must share system resources with other running apps. At the same time, the team notes that the computation has to be highly efficient: it must process a large Photos library in a short time, with low thermal impact and low power usage.
To overcome these barriers, Apple optimized the framework to “fully leverage” the CPU and GPU using its BNNS (Basic Neural Network Subroutines) library and Metal graphics technology. It also optimized memory usage for image loading, caching, and network inference.
Apple has been investing heavily in machine learning of late. It built a dedicated “Neural Engine” into the A11 Bionic processor that powers the iPhone X and iPhone 8. Furthermore, CEO Tim Cook said earlier this year that machine learning is an indispensable part of Apple’s self-driving car platform, which is currently being tested on roads in California.
What do you make of Apple’s latest contributions to machine learning? Is there anything you would add to the paper? Share your opinions in the comments, and stay with us for more tech updates.