Computer Vision: A World Changing Technology
In 1982 I was invited to visit the University of Nebraska–Lincoln Agricultural School to discuss a new network they were in the process of building, which they called Agrinet. At the time I was the upper Midwest sales representative for GTE Telenet, based in Edina, Minnesota. It was the vision of the University of Nebraska–Lincoln to help Nebraska's farmers get an accelerated start into the 21st century. GTE Telenet was the first company to build a commercial packet switching network, based on ARPANET technology and the CCITT X.25 protocol.
This meant that farmers would be able to dial local modems, connected over high-speed lines, into the University of Nebraska–Lincoln's computer systems and automate agricultural functions such as planning field layouts, crop schedules, and commodity trading. My mother had grown up as a farm girl with 11 brothers and sisters in Kendall, Wisconsin. To me, it was thrilling to be able to bridge my mother's childhood with the future. I happily worked with them to get their network going.
We have now moved into the 21st century, which brings with it the technologies of the Internet, Cloud Computing, and Artificial Intelligence (AI), and farmers have begun to utilize all three of them.
I am going to introduce two of the leading companies that are helping agriculture bloom: Sentera in Minneapolis, Minnesota, and CropTracker in Kingston, Ontario, along with a group of automation equipment companies that give these two a reason for being.
Crop Management Systems
Sentera designs and manufactures software and sensor tools “that crop advisors, consultants, agronomists, and other ag professionals can use to help growers achieve the best possible outcomes for their operations.”
Sentera FieldAgent Web
FieldAgent is Sentera’s nexus of their product line of mobile, web, and cloud-based software, sensors, and drones.
FieldAgent Mobile is an on-the-ground scouting tool used to gather and act on real-time imagery and analytics. Its strength is the ease with which it captures data using drones equipped with Sentera and third-party sensors.
FieldAgent Web provides tablet access to crop health data and analytics. It is used to share and act on seasonal insights.
FieldAgent Notifications leverage weather, satellite, and human observations along with drone data to increase accuracy and efficiency for management decisions.
LiveNDVI is video technology that, when used with Double 4K and AGX710 sensors, lets you gather real-time data for decision-making.
FieldAgent APIs integrate with connected applications and platforms including John Deere Operations Center, Climate FieldView, and Beck’s Farmserver.
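The NDVI values behind products like LiveNDVI come from a simple ratio of red and near-infrared reflectance. As an illustration only (this is a generic formula, not Sentera's implementation), the index can be computed per pixel like this:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense, healthy vegetation; values near 0
    indicate bare soil; negative values suggest water or clouds.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Healthy crops reflect strongly in near-infrared and absorb red light,
# so they score high; bare soil reflects both about equally.
healthy = ndvi(nir=0.50, red=0.08)
bare_soil = ndvi(nir=0.25, red=0.20)
```

In practice the same function runs over whole image arrays from the drone's sensor, producing a color-coded plant-health map of the field.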
Croptracker Farm Management Software
CropTracker is another comprehensive agriculture management software system and a top leader in its group. It helps farmers maintain accurate records, enhance traceability, and manage labor and production costs from planting through shipping.
Its features include produce packing traceability records, harvest field packing, production practice tracking, work productivity and labor tracking, shipping traceability records, receiving records, agricultural irrigation records, storage records, analytics and reports, and audits.
Like Sentera, CropTracker integrates with autonomous vehicles and drones. What makes this integration powerful is Computer Vision. Beyond manual pre-programming, Artificial Intelligence (AI) is rapidly finding its way into these kinds of products, and its costs are coming down dramatically, to the point where midsize family farms can compete with corporate farms and feed the world.
In conventional vehicles the driver detects hazards, navigates, and avoids objects. In autonomous vehicles there is no driver and no remote control, so simply programming a path with GPS is not enough. An autonomous vehicle such as a tractor must be able to see, plan a route, understand images of the area, and adjust its path in real time to avoid damage.
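The see-plan-adjust loop can be sketched in miniature. In this hypothetical toy example (not any vendor's actual navigation stack), the tractor plans a route over a field grid, then re-plans when the vision system reports a new obstacle:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a field grid; 1 = obstacle, 0 = clear.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk backwards to rebuild the route
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# Initial plan across a clear 3x3 field...
field = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
route = plan_path(field, (0, 0), (2, 2))

# ...then the camera detects debris mid-field, and the tractor re-plans
# around it in real time.
field[1][1] = 1
route = plan_path(field, (0, 0), (2, 2))
```

Real systems replace the grid with continuous sensor maps and the search with far more sophisticated planners, but the loop of perceive, update the map, and re-plan is the same.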
For versatility and mobility, drones rank in the top group. They can carry different kinds of sensors and cameras and rapidly cover many miles of farm fields as well as other terrains.
As Laura Barrera says in her AGCO article “What is Driving the Driverless Momentum in the Ag Equipment Industry”: “Ask someone in agriculture what comes to mind when they think of autonomy and they’ll likely say a driverless tractor.”
Kraig Schulz, CEO of Autonomous Tractor Corporation (ATC), says that in a corn-soybean operation, having the right people on hand at the right time is a case for semiautonomous equipment. In a semiautonomous environment, one man could run a fleet of robots simultaneously. ATC is located just northwest of Minneapolis in St. Michael, MN.
AGCO has planting robots that can do what Kraig Schulz talked about. AGCO is located in Duluth, GA.
An area of interest favoring multiple smaller robots is soil compaction. When soil is highly compacted, it is harder to seed, water, and grow crops.
At this point, let me make full disclosure. I live only 35 miles away from Autonomous Tractor Corporation Headquarters and I have a natural bias toward supporting local businesses.
Most experts agree that the move in autonomous and semiautonomous agriculture is toward electric motors. According to Schulz, a 200 hp tractor run for 10 hours would require about 1,500 kWh. That would cost $350,000 in lithium batteries, which would weigh more than the tractor itself. He thinks the better option for agriculture is to couple a generator to the engine and use electric wheel motors, without batteries, to power the tractor.
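Schulz's energy figure checks out with simple arithmetic. The pack energy density below is my own assumption (roughly 250 Wh/kg for lithium cells), used only to illustrate the weight problem:

```python
HP_TO_KW = 0.7457          # 1 mechanical horsepower in kilowatts
PACK_WH_PER_KG = 250       # assumed lithium pack energy density (Wh/kg)

power_kw = 200 * HP_TO_KW             # ~149 kW continuous output
energy_kwh = power_kw * 10            # 10-hour workday -> ~1,491 kWh,
                                      # matching Schulz's ~1,500 kWh figure
battery_kg = energy_kwh * 1000 / PACK_WH_PER_KG   # ~6 metric tons of cells
```

Roughly six tonnes of battery, before any packaging or cooling, is why a generator feeding wheel motors looks more practical for all-day field work.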
ATC eDrive Technology Explained
Until recently, farmers, realtors, agronomists, scientists, insurance companies, and researchers have walked the fields in order to observe and understand the land and its crops. This has been a time-consuming, labor-intensive, and often backbreaking process for analyzing hundreds of acres of crops.
In the last few years, the cost of unmanned aerial vehicles (UAVs), commonly known as drones, has come down dramatically, putting them within reach of the family farm.
RMAX Drone by Yamaha
Today’s drones offer greater payloads, longer flight times, and the ability to fly in varying conditions including subzero temperatures. There is a sizable variety of drones available for agriculture so your particular needs should determine what kind of drone you get.
Some of the questions you should be asking yourself are:
- will you be mapping a small area or hundreds of acres
- what kind of terrain will you be flying
- the kind of environment you farm: dry, wet, cold, windy
- fixed wing, multirotor, vertical takeoff and landing
- what kind of payload will you be carrying and what kind of sensor devices will you attach
- the range and speed you will be flying
You may need more than one kind of UAV. The drones in the last paragraph were oriented more toward photography and sensing, but you may want drones that work better for soil sampling, water testing, and seeding.
PrecisionHawk DJI Matrice 200 v2
According to PrecisionHawk: “The M200 is a “workhorse” drone, perfect for repeated use in the toughest farming environments. It flies in sub-zero temperatures and high winds. Also, for farms or ranches in challenging terrain, the M200 features a complement of safety features, such as obstacle avoidance sensors and the DJI Airsense ADS-B receiver, helpful in detecting cooperative aircraft within your flight area. Use the dual gimbal to deploy two payloads, such as visual and thermal sensors.”
$8,499 – $11,749
Sentera PHX Complete System
According to Sentera: “Sentera’s PHX™ is a highly-reliable, easy-to-use, hand-launchable fixed-wing drone that gives you the ability to view live HD video and capture a wide array of analytic data, including stand count, weed detection, and plant health using the newest variants of the Sentera Double 4K sensor payload.
Leveraging a new long-range omnidirectional communication link, the PHX has a reach of 2+ miles, quickly scouting large fields with precision and is ideal for experienced pilots. The PHX is the best-performing and best-valued professional fixed-wing drone available today.
The PHX is hot-swappable, accepting several different sensor types – including ultra-precise RTK GPS Double 4K sensor payloads.
- Deep Analytics: Purpose-built payload options allow you to perform stand count, weed detection, and plant health analytics. PHX is the only fixed-wing drone that maps stand counts and weed locations early in the growing season when action matters. Sentera’s FieldAgent™ data analytics software allows PHX users to create accurate maps and make decisions at the field edge without an internet connection.
- Long-range: By using an omnidirectional, industrial-grade communications link, the PHX is designed to fly up to two miles away or more. This is helpful for users with BVLOS permissions, and ideal for the average user who stays within the typical one-mile VLOS range.
- Fast Setup: A small, lightweight, plug-and-play communication box and streamlined pre-flight checklist enable you to easily go from setup to launch quickly and easily.
- Multiple Payload Options: The 2019 PHX accepts the newest versions of Sentera Double 4K sensors, allowing you to capture more crop health data and gain deeper insights. Ultra-precise RTK GPS is an available option for all PHX Double 4K sensor payloads.
With a cruise speed of 35mph and up to a 59-minute endurance, the PHX is capable of covering broad areas that other drones of the same size cannot. In a single flight, the PHX can collect data from 700 acres.
Leverage the PHX RTK variant to achieve sub-5cm and better accuracy on orthomosaics and 3D mapping and modeling products. Achieve this without the need for surveyed ground control points or time-consuming post-processing of the GPS data. An ideal solution for survey, agriculture, and mapping customers who need to reliably – and quickly – capture precision data, the dual-band RTK payload delivers true L1/L2, multi-constellation RTK to the PHX. Sentera’s latest offering shatters the cost barrier for industrial-grade, dual-frequency RTK-enabled image collection.”
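Sentera's 700-acre claim is easy to sanity-check. Assuming a full 59-minute flight at the 35 mph cruise speed with no overlap between passes (my assumptions, not Sentera's published method), the implied imaging swath works out to roughly 170 feet:

```python
ACRE_SQFT = 43_560   # square feet in one acre
MILE_FT = 5_280      # feet in one mile

# Distance covered in one maximum-endurance flight: ~34.4 miles.
distance_ft = 35 * (59 / 60) * MILE_FT

# Effective swath needed to image 700 acres in that distance.
swath_ft = 700 * ACRE_SQFT / distance_ft   # ~168 ft
```

A roughly 170-foot strip per pass is plausible for a wide-angle sensor at fixed-wing mapping altitudes, so the figure hangs together.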
One of the things that we as human beings easily do is see things. It is called “vision”. It is one of our greatest capabilities. It lets us know what is happening in our three-dimensional world, and it lets us know when it’s happening and where it’s happening. Sight is truly a gift from God.
When we are hungry we can see where the food is. If there is danger we see it even if it is in our periphery. When we see a pretty woman we are delighted. In order to see we have eyes and we know that our eyes have retinas and pupils and optical nerves that connect directly with the brain. And we know that images come through our eyes, through the pupils and are directed to the back of our eyes where the optical nerves pick up the light waves and transmit them to the brain where they are interpreted.
Machines do not have the advantage of the human brain. Through machine learning they learn to mimic the brain with a fair amount of accuracy. Continuous training with more sessions and larger databases can begin to make systems seem as though they are seeing things and interpreting them correctly.
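To a machine, an image is just a grid of numbers, and "seeing" starts with arithmetic on that grid. This toy example (purely illustrative, not any product's code) detects a vertical edge the way early computer vision systems, and the first layers of modern convolutional networks, do: by sliding a small filter that responds to intensity changes.

```python
import numpy as np

# A tiny "image": dark on the left, bright on the right.
image = np.array([
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
], dtype=float)

# Horizontal-gradient kernel: large output where values jump left-to-right.
kernel = np.array([-1.0, 1.0])

# Slide the kernel across each row (a 1-D convolution, done by hand).
response = np.array([
    [np.sum(image[r, c:c + 2] * kernel) for c in range(image.shape[1] - 1)]
    for r in range(image.shape[0])
])
# The response is large only at column 1, exactly where dark meets bright:
# the machine has "found" the edge.
```

Deep learning stacks thousands of such filters, learned from data rather than hand-written, which is how training on larger databases makes the system's "sight" progressively better.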
Dr. Larry Roberts is considered the Father of Computer Vision and the Father of The Internet. I was fortunate to work for Dr. Roberts when he was president of GTE Telenet, as Telenet’s primary salesman in the upper Midwest. Dr. Roberts had been with ARPA and was the brain behind the ARPANET’s packet switching technology, which led to the CCITT X.25 protocol and, eventually, TCP/IP.
In fact, Computer Vision and the Internet work well together. Users are most often remote from the computers processing Computer Vision, and Artificial Intelligence is rapidly becoming closely integrated with both of these domains.
In 1966, two of the early pioneers of artificial intelligence, Seymour Papert and Marvin Minsky, launched a program called the Summer Vision Project, with the goal of creating a computer system that could identify objects in images. While the project did not pan out initially, it laid the foundation for research by Kunihiko Fukushima and Yann LeCun that began the Deep-Learning Revolution. Their work, beginning in the late 1970s and into the 1980s and later, came to be used by the U.S. Postal Service and banking corporations for envelope and check reading.
Segments where Computer Vision is active:
- Retail and retail security
- Military Applications
- Automation and Robotics
- Research and Development
Retail and Security
One of the newest technologies, which I have yet to see in person, is cashierless stores. They have actually been around since 2016, initiated by Amazon with their Go stores. They are also known in some areas as Grab and Go.
The technology is based upon a combination of Computer Vision and AI. With hundreds of cameras placed throughout the store tracking each customer, and the placement of every product known, the system is able to keep track of every product each customer picks up. When the customer leaves the store, his or her account is charged. I have not seen any reports or statistics regarding people attempting to shoplift products.
NCR has acquired a company, StopLift Checkout Vision Systems, which develops an application called ScanItAll that is said to have the capability to detect “sweethearting” — in industry terms, a cashier giving away merchandise without charge to a customer by faking a scan or fake-ringing-up the merchandise. According to NCR, there have been more than 4 million occasions of sweethearting.
Robotic Inventory Taking
You may have already seen robots going down the aisles of your favorite grocery store or Target store, taking inventory of what’s on the shelves.
2020 Tesla Model S
According to Tesla, “All new Tesla cars come standard with advanced hardware capable of providing Autopilot features today, and full self-driving capabilities in the future – through software updates designed to improve functionality over time.”
Some of the features include:
Narrow Forward Camera
Main Forward Camera
Wide Forward Camera
Forward-Looking Side Cameras
Rearward Looking Side Cameras
Having listened to the Amazon Audible book “Elon Musk” by Ashlee Vance, I’m pretty certain that Tesla has been ready for an autonomous vehicle for more than two years.
Ford Motor Company
Ford has a subsidiary, Quantum Signal AI, that is helping Ford develop core strengths in Computer Vision, AI, robotics, and simulation in order to develop autonomous vehicles. According to industry experts, Ford plans to launch a commercial self-driving service sometime in 2021.
Additionally, Ford has invested $1 billion in Argo AI, which is helping Ford develop its self-driving systems.
In July 2018 (updated in August 2018), the Detroit Free Press published an article, “How General Motors is leading the race for self-driving cars”, in which they counted GM to be way ahead of the competition and even included a graph showing Tesla to not even be in the game. Two weeks later the Detroit Free Press published another article stating that “GM says consumers can do ride sharing next year in self-driving cars.”
The car they are talking about is the Cruise, the result of a joint venture that General Motors entered beginning in 2016. The Cruise was originally a Chevrolet Bolt. Progress and news hype are two different things; current discussion and articles say that the Cruise AV will be available sometime in 2021.
I have very little doubt that General Motors will come out with a good working self-driving automobile. It is just taking longer than predicted.
Fiat Chrysler, BMW, Intel, Mobileye, Google, (and the Planet Pluto?) have all been collaborating with each other at one time or another since 2016 to bring solutions for fully automated driving into production by 2021, according to the Fiat Chrysler Automobiles (FCA) website.
According to healthimages.com, 70 million MRIs and CT scans are performed every year. That means that every minute of every hour, 133 new scans are being done. Our healthcare system is dependent on images. In addition, doctors are dependent on patients’ descriptions of their symptoms, forcing them to spend more time triaging and gathering background information. Natural Language Processing (NLP), Computer Vision, and Artificial Intelligence (AI) are rapidly becoming necessary tools to assist our doctors and other healthcare professionals.
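The 133-scans-per-minute figure follows directly from the annual total:

```python
scans_per_year = 70_000_000
minutes_per_year = 365 * 24 * 60            # 525,600 minutes in a year
scans_per_minute = scans_per_year / minutes_per_year   # ~133 scans/minute
```

That is more than two new scans every second, around the clock, which gives a sense of the image volume radiologists and their AI assistants face.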
Mayo Clinic CIO Chris Ross at Health 2.0
Chris Ross, Mayo Clinic’s Chief Information Officer (Mayo is where I go every year because of lifesaving surgeries I’ve had), said at the Health 2.0 Conference last September, “This artificial intelligence stuff is real and it is coming quickly to the care setting near you.”
In 2018, Mayo completed a huge four-year, 90-hospital, $1.5 billion Epic integrated hospital system. In January 2020 they announced their Clinical Data Analytics Platform as the first venture under the Mayo Clinic Platform, delivering understanding and knowledge derived from data.
Mayo Clinic has been leading the medical community for 150 years, and you can expect other leading organizations, such as Johns Hopkins and Cleveland Clinic, to follow suit.