Summer Vision Project
One of the things we as human beings do easily is see. We call it "vision," and it is one of our greatest capabilities. It lets us know what is happening in our three-dimensional world, when it is happening, and where. Sight is truly a gift from God.
When we are hungry, we can see where the food is. If there is danger, we see it, even in our periphery. When we see a pretty woman, we are delighted. To see, we have eyes, and we know our eyes have retinas, pupils, and optic nerves that connect directly to the brain. Light enters through the pupils and is directed to the back of the eye, where the retina picks up the light waves and the optic nerve transmits the signals to the brain, where they are interpreted.
Machines do not have the advantage of the human brain. Through machine learning, they learn to mimic it with a fair amount of accuracy. Continuous training, with more sessions and larger datasets, can make systems begin to seem as though they are seeing things and interpreting them correctly.
Dr. Larry Roberts is considered the Father of Computer Vision and one of the fathers of the Internet. I was fortunate to work for Dr. Roberts when he was president of GTE Telenet, serving as Telenet's primary salesman in the upper Midwest. Dr. Roberts had led the ARPANET program at ARPA and was the brain behind the CCITT X.25 packet-switching protocol; the packet-switching concepts he pioneered later underpinned TCP/IP.
In fact, Computer Vision and the Internet work well together: users are most often remote from the computers doing the Computer Vision processing. Artificial Intelligence is rapidly becoming integrated with both of these domains.
In 1966, two of the early pioneers of artificial intelligence, Seymour Papert and Marvin Minsky, launched a program called the Summer Vision Project, with the goal of creating a computer system that could identify objects in images. While the project did not pan out initially, it laid the seeds for foundational research by Kunihiko Fukushima and Yann LeCun that began the deep-learning revolution. Their work, beginning in the late 1970s and continuing into the 1980s and beyond, came to be used by the U.S. Postal Service and banks for envelope and check reading.
Segments where Computer Vision is active:
- Retail and retail security
- Military Applications
- Automation and Robotics
- Automotive and Self-Driving Vehicles
- Healthcare
- Research and Development
Retail and Security
One of the newest technologies, which I have yet to see in person, is cashierless stores. They have actually been around since 2016, pioneered by Amazon with its Go stores, and are also known in some areas as Grab and Go.
The technology is based on a combination of Computer Vision and AI. With hundreds of cameras placed throughout the store watching the customers, and with the placement of every product known, the system is able to keep track of every product and every customer who picks one up. When the customer leaves the store, his or her account is charged automatically. I have not seen any reports or statistics regarding people attempting to shoplift products.
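To make the idea concrete, here is a minimal sketch of the virtual-cart bookkeeping such a store might do. This is a hypothetical illustration, not Amazon's actual system: in a real store, camera-based vision models would generate the pick-up and put-back events, while here we simulate them directly.

```python
from collections import defaultdict

class CashierlessStore:
    """Hypothetical virtual-cart ledger for a Grab-and-Go style store."""

    def __init__(self, prices):
        self.prices = prices  # product name -> unit price
        # customer -> (product -> quantity currently held)
        self.carts = defaultdict(lambda: defaultdict(int))

    def pick_up(self, customer, product):
        # The vision system saw this customer take a product off a shelf.
        self.carts[customer][product] += 1

    def put_back(self, customer, product):
        # The vision system saw the product returned to the shelf.
        if self.carts[customer][product] > 0:
            self.carts[customer][product] -= 1

    def checkout(self, customer):
        # The customer walks out; total up and charge the virtual cart.
        cart = self.carts.pop(customer, {})
        return round(sum(self.prices[p] * q for p, q in cart.items()), 2)
```

A customer who picks up milk and bread but puts the bread back would be charged only for the milk on exit.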
NCR has acquired a company, StopLift Checkout Vision Systems, whose application ScanItAll is said to have the capability to detect "sweethearting," an industry term for a cashier giving merchandise away without charge by fake-scanning or fake-ringing it up. According to NCR, there have been more than 4 million occurrences of sweethearting.
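One plausible way to frame such detection is to compare what the cameras see crossing the scan bed against what the point-of-sale system actually recorded, flagging the difference. The sketch below is my own simplified illustration of that idea, not NCR's actual algorithm:

```python
from collections import Counter

def detect_sweethearting(vision_items, scanned_items):
    """Flag items the cameras saw pass the scanner that never registered a scan.

    vision_items:  item labels the vision system observed moving across the scan bed
    scanned_items: item labels the point-of-sale system actually recorded
    Returns a Counter mapping each suspect item to its missing-scan count.
    """
    return Counter(vision_items) - Counter(scanned_items)
```

If the cameras see a steak and two sodas go by but only one soda rings up, both the steak and one soda come back as suspected fake scans.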
Robotic Inventory Taking
You may have already seen robots going down the aisles of your favorite grocery store or Target, taking inventory of what is on the shelves.
2020 Tesla Model S
According to Tesla, "All new Tesla cars come standard with advanced hardware capable of providing Autopilot features today, and full self-driving capabilities in the future, through software updates designed to improve functionality over time."
Some of the features include:
Narrow Forward Camera
Main Forward Camera
Wide Forward Camera
Forward-Looking Side Cameras
Rearward Looking Side Cameras
Having listened to the Audible audiobook "Elon Musk" by Ashlee Vance, I'm pretty certain that Tesla has been ready for an autonomous vehicle for more than two years.
Ford Motor Company
Ford has a subsidiary, Quantum Signal AI, that is helping Ford develop core strengths in Computer Vision, AI, robotics, and simulation in order to build autonomous vehicles. According to industry experts, Ford plans to launch a commercial self-driving service sometime in 2021.
Additionally, Ford Motor has invested $1 billion in Argo AI, which is helping the company develop its self-driving systems.
In July 2018 (updated in August 2018), the Detroit Free Press published an article, "How General Motors is leading the race for self-driving cars," in which it ranked GM well ahead of the competition and even included a graph showing Tesla as not even in the game. Two weeks later, the Detroit Free Press published another article stating that "GM says consumers can do ride sharing next year in self-driving cars."
The car they are talking about is the Cruise AV, the result of General Motors' 2016 acquisition of Cruise Automation. The Cruise AV was originally based on the Chevrolet Bolt. Progress and news hype are two different things: current discussion and articles say the Cruise AV will be available sometime in 2021.
I have very little doubt that General Motors will come out with a good working self-driving automobile. It is just taking longer than predicted.
Fiat Chrysler, BMW, Intel, Mobileye, Google (and the planet Pluto?) have all been collaborating with one another at one time or another since 2016 to bring solutions for fully automated driving into production by 2021, according to the Fiat Chrysler Automobiles (FCA) website.
According to healthimages.com, 70 million MRIs and CT scans are performed every year. That works out to roughly 133 new scans every minute of every hour. Our healthcare system is dependent on images. In addition, doctors are dependent on patients' descriptions of their symptoms, forcing them to spend more time triaging and gathering background information. Natural Language Processing (NLP), Computer Vision, and Artificial Intelligence (AI) are rapidly becoming necessary tools to assist our doctors and other healthcare professionals.
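The per-minute figure follows from simple arithmetic on the annual total:

```python
annual_scans = 70_000_000          # MRIs and CT scans per year, per healthimages.com
minutes_per_year = 365 * 24 * 60   # 525,600 minutes in a year
scans_per_minute = annual_scans / minutes_per_year
print(round(scans_per_minute))     # -> 133
```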
Mayo Clinic CIO Chris Ross at Health 2.0
Chris Ross, Mayo Clinic's Chief Information Officer, said at the Health 2.0 Conference last September, "This artificial intelligence stuff is real and it is coming quickly to the care setting near you." (Mayo is where I go every year because of lifesaving surgeries I've had.)
In 2018, Mayo completed a huge four-year, 90-hospital, $1.5 billion integration onto the Epic hospital information system. In January 2020, they announced the Clinical Data Analytics Platform, their first venture under the Mayo Clinic Platform, built on understanding and knowledge derived from data.
Mayo Clinic has been leading the medical community for 150 years, and you can expect other leading organizations, such as Johns Hopkins and the Cleveland Clinic, to follow suit.