Neural Pocket Inc. (4056)
Share Price ¥4,235 | Market Cap: $550m | EV/Rev: 68x
“In a future smart city run entirely by artificial intelligence, chances are you will walk into a bar for the first time and the bartender will already know your favorite drink.”
Headquartered in Tokyo, Neural Pocket provides end-to-end smart city AI solutions for large companies and government entities in Japan and beyond.
What’s a smart city?
A smart city is a city that uses technology to provide services and solve urban problems. A smart city improves transportation and accessibility, improves social services, promotes sustainability, and gives citizens a better quality of life. Simple examples include smart streetlights that adapt their lighting to activity on the street, or parking sensors that feed real-time availability of vacant spaces to an app.
What does Neural Pocket do?
Neural Pocket uses AI technology, specializing in image and video recognition, to solve problems in three main areas: mobility, infrastructure, and security.
Here are a few examples:
Mobility: Neural Pocket has developed the world’s first AI-equipped drive recorder, “Smart-kun”, available as a free app. The recorder uses AI object detection to alert the driver to hazards.
Infrastructure: Using computer vision, Neural Pocket has developed an algorithm for smart factories that analyzes cardboard boxes. The simulation must account for any condition the boxes might be in, such as being dented or damaged, as well as the labels and logos on their surfaces.
Security: One of Neural Pocket’s partners wanted to provide a safer environment at one of its offices, and commissioned Neural Pocket to develop a computer vision model for its smart security camera system to detect dangerous objects.
What’s the tech edge?
Edge processing technology executes AI analysis inside the camera terminal. In real time, the camera's video is AI-processed, converted to abstract data, and then saved to a database where it can be analyzed and visualized. The advantages of edge AI are lower cost, low latency, privacy, and roughly 90% lower power consumption.
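The edge pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Neural Pocket's actual stack: the function and field names below are assumptions. The key idea is that the raw video frame never leaves the device; only a small abstract record does.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Detection:
    """One object found by the on-camera model (hypothetical schema)."""
    label: str
    confidence: float
    bbox: tuple  # (x, y, w, h) in pixels

def frame_to_abstract_record(detections, camera_id, timestamp):
    """Convert raw detections into the abstract record that leaves the device.

    The video frame itself is discarded on the edge device; only this
    small, privacy-preserving summary is sent to the central database.
    """
    return {
        "camera": camera_id,
        "ts": timestamp,
        "objects": [asdict(d) for d in detections],
    }

# Hypothetical output of an on-camera model for one frame.
dets = [Detection("person", 0.92, (120, 40, 60, 180)),
        Detection("car", 0.88, (300, 90, 200, 120))]

record = frame_to_abstract_record(dets, camera_id="cam-07", timestamp=1700000000)
payload = json.dumps(record)  # a few hundred bytes instead of a full video frame
```

Shipping only `payload` rather than video is what drives the latency, privacy, and power advantages: the heavy pixels are processed and dropped at the edge.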
A wide variety of AI libraries for object and person recognition. Many services can be supported quickly through person and behavior recognition, object detection, and flow analysis. The AI can be applied to existing cameras and improves 3D object-size measurement, self-position estimation, and recognition of blurred images.
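One of the listed capabilities, flow analysis, can be illustrated with a toy line-crossing counter. This is a minimal sketch under simplifying assumptions: the tracks are hypothetical hand-made centroids, whereas a real system would obtain them from an object tracker running on detections.

```python
def count_line_crossings(tracks, line_y):
    """Count how many tracked objects cross a horizontal counting line.

    tracks: dict mapping track_id -> list of (x, y) centroids over time.
    Returns (downward_crossings, upward_crossings).
    """
    down = up = 0
    for positions in tracks.values():
        # Walk consecutive pairs of positions and check for a line crossing.
        for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
            if y0 < line_y <= y1:
                down += 1
            elif y1 < line_y <= y0:
                up += 1
    return down, up

# Two hypothetical pedestrian tracks crossing a counting line at y=100.
tracks = {
    1: [(50, 80), (52, 95), (55, 110)],      # moves downward past the line
    2: [(200, 130), (198, 105), (195, 90)],  # moves upward past the line
}
print(count_line_crossings(tracks, line_y=100))  # -> (1, 1)
```

Counts like these, rather than the video itself, are what a smart-city dashboard typically stores and visualizes.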
“This one is a little interesting. We don't often explain technology in our earnings announcements, but we are keen on these technologies.” — CEO
NEURAL POCKET + UNITY
The manual process of training computer vision models up to production-level quality requires collecting and annotating large amounts of real-world data, which takes a lot of time and resources. By using Unity's real-time 3D development platform and its Computer Vision Perception Package, Neural Pocket overcomes these challenges for computer vision tasks.
The challenge is to produce large amounts of data with perfect annotations quickly and efficiently for computer vision model training.
Reduced training time for computer vision models by 95% (one week compared to six months on average)
A 95% reduction in data collection, annotation, and training costs ($6,750 vs. $150,000 on average)
Improved computer vision model performance
Capacity to take on multiple projects and drive revenue growth
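The headline reductions above check out arithmetically. The sketch below verifies them, assuming "six months" means roughly 26 weeks; the figures come straight from the bullets.

```python
# Cost: $6,750 with synthetic data vs. $150,000 on average without it.
cost_reduction = 1 - 6_750 / 150_000    # = 0.955, i.e. ~95%

# Time: one week vs. roughly 26 weeks ("six months on average").
time_reduction = 1 - 1 / 26             # ~0.962, i.e. ~96%

print(f"cost: {cost_reduction:.1%}, time: {time_reduction:.1%}")
```

Both work out at or slightly above the quoted 95%, so the claims are internally consistent.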
By leveraging Unity and the Unity Computer Vision Perception Package, the Neural Pocket team has significantly reduced the number of iterations and associated costs for training computer vision models while improving the model's overall performance. Synthetic images are generated from Unity's tools and automatically annotated and labeled for iterative training of computer vision models on simulated and real data.
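The workflow can be sketched as follows. The record format and function names are illustrative assumptions, not Unity Perception's actual output schema: the point is that synthetic images arrive with labels for free from the renderer, while real frames still need human annotators, and the two are mixed into one training set.

```python
import random

def make_synthetic_record(i):
    """Synthetic frames come with pixel-perfect labels from the renderer,
    so no human annotation step is needed (hypothetical record format)."""
    return {"image": f"synthetic_{i:05d}.png", "source": "synthetic"}

def make_real_record(i):
    """Real frames still require manual labelling by human annotators."""
    return {"image": f"real_{i:05d}.png", "source": "real"}

def build_training_set(n_real, synthetic_ratio, seed=0):
    """Mix real and synthetic records at a target fraction of synthetic data."""
    n_synth = round(n_real * synthetic_ratio / (1 - synthetic_ratio))
    data = [make_real_record(i) for i in range(n_real)]
    data += [make_synthetic_record(i) for i in range(n_synth)]
    random.Random(seed).shuffle(data)
    return data

# ~1,000 real stills plus enough synthetic frames for a 90% synthetic mix,
# the long-term target mentioned later in the text: 9,000 synthetic images.
ds = build_training_set(n_real=1_000, synthetic_ratio=0.9)
print(len(ds))  # -> 10000
```

The economics follow directly from this structure: each extra synthetic record costs render time, while each extra real record costs a filming session plus annotator hours.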
Neural Pocket staged a video shoot and filmed participants holding weapons in a variety of easy- to difficult-to-detect orientations as part of a computer vision training exercise. Afterwards, Neural Pocket generated approximately 1,000 still images from the videos, and its internal team of annotators manually labeled each one with the weapons shown.
Based on these data, the first training cycle yielded a detection rate of 27% for knives and 75% for bats and guns. Neural Pocket then used Unity Computer Vision to simulate 5,100 frames, roughly five times the original image count. After training the model on the combined synthetic and real data, the detection rate for knives improved from 27% to 87%, while that for bats and guns improved from 75% to 100%.
In the future, Neural Pocket plans to reduce its reliance on real-world data and use synthetic data for as much as 90% of training data. The team’s confidence in Unity’s computer vision capabilities is a key factor driving this shift.
LTM Net sales ¥885m
MRQ sales growth: 76% | Operating margin: 30%
Cash and deposits: ¥1,455m; debt ¥563m
Granted 12 domestic patents (plus 10 pending). Overseas, 4 patents in preparation.
Future Strategy: The goal is to move from a project/fee-based revenue model to a “unit-based” service. The idea is to make the services more generalized and easier to use, in order to sell more services off the shelf. Neural Pocket will expand its partner network by promoting joint business development and co-creation projects.