Smart Dust and Other Wild Tech Ideas That Could Become Major Breakthroughs in the Next Decade

The latest wave of technologies proposes some wild future scenarios where the financial payback is still far from sure

Capitalizing on emerging technology is crucial for successful companies if they want to avoid losing consumers and market share. But there is also a fine line between tech hype and reality — many of the buzziest ideas end up as busts, and only a few become major breakthroughs.

Some of the greatest recent advances in tech are already paying off hugely for companies — and in many cases without consumers being aware of their power. Take machine learning, once an out-there idea but now the norm. At online shopping platform Etsy, machine learning has transformed the search function covering more than 60 million goods, and in 2017 and 2018 it “unlocked” $260 million in gross merchandise sales, according to Etsy chief technology officer and CNBC Technology Executive Council member Mike Fisher.

He explained that machine learning re-ranks search results in real time based on dozens of features, including how buyers interact with the items. “We know that the first page of search results generate more than 80% of search purchases, so getting it right is critically important for Etsy’s business,” he said.
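Etsy has not published its ranking system, but the general pattern Fisher describes (score each candidate listing with a trained model, then sort by that score) can be sketched roughly as follows. The feature names, sample data and model choice below are hypothetical, chosen only to illustrate the idea.

```python
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical re-ranking sketch: train a model to predict purchase
# probability from listing/buyer features, then sort results by that score.
# The features and data below are invented for illustration only.

# Each row: [listing click-through rate, price, query-title match score]
train_features = [
    [0.12, 25.0, 0.9],
    [0.02, 60.0, 0.4],
    [0.08, 15.0, 0.7],
    [0.01, 80.0, 0.2],
]
train_purchased = [1, 0, 1, 0]  # did the listing lead to a purchase?

model = GradientBoostingClassifier().fit(train_features, train_purchased)

def rerank(results):
    """Sort candidate listings by predicted purchase probability, best first."""
    def score(listing):
        return model.predict_proba([listing["features"]])[0, 1]
    return sorted(results, key=score, reverse=True)

candidates = [
    {"title": "ceramic mug", "features": [0.03, 70.0, 0.3]},
    {"title": "knit scarf",  "features": [0.10, 20.0, 0.8]},
]
print([r["title"] for r in rerank(candidates)])  # likely puts "knit scarf" first
```

A production system would be retrained continuously and fed far richer signals, but that re-sort step is the essence of what “re-ranking in real time” means.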

The latest wave of technologies proposes some wild future scenarios where the financial payback is still far from sure: smart sensors the size of dust; exoskeletons that can make humans more robotic; 3-D-printed objects that can change shape on their own, without any human help. Which are likely to be commercially viable in the next decade?

CNBC recently asked the Technology Executive Council — made up of 95 leading executives from the corporate, nonprofit and government sectors — to weigh in as part of our first Tech Council survey. The survey was conducted from May 21 through June 8, 2019.

While not an exact ranking, the ideas listed below are organized based on the responses, starting with the big technology ideas the council saw as least viable and ending with those that stand the best chance of breaking through.

Smart dust

What is it?
Smartwater and Smart Rope have proven that businesses can make just about any inanimate object sexy by adding “smart” to the name.

Smart dust is a collection of sensors that can gather data from an environment and wirelessly transmit it back to the cloud, all packaged within particles the size of a grain of sand. These particles can sense anything from light to vibration to humidity. Companies like Analog Devices and Jeeva Wireless have been working hard to perfect the technology, but it has failed to get much buy-in from big tech to date.

Why it could be important
With the rise of the Internet of Things, companies are constantly seeking better ways to gather data on their customers in order to improve their services and better understand what those customers value. The ability to create an entire network of dust-sized sensors constantly transmitting data to the cloud would be a gigantic leap toward that goal. From covering your engine with smart dust to diagnose car problems to spreading it over a farm field to know when crops need watering, the technology has a wide range of applications. Some health start-ups already have launched ingestible sensors the size of a grain of sand. Researchers at the University of California, Berkeley have even pitched implanting neural dust inside a person’s skull to monitor brain activity.

Challenges
The idea of microscopic computers floating around in the air every waking moment is a privacy nightmare only Mark Zuckerberg could get excited about. With lawmakers working to strengthen American data privacy laws, advocating for data sensors small enough to be inhaled is probably not a priority for Big Tech companies right now. And anyone who has had the lovely experience of cleaning up a glitter-covered floor knows that controlling dust-sized particles once they’ve been deployed is no easy task. Smart dust’s greatest advantage — its size — might also be its greatest obstacle to overcome.

Exoskeleton

What is it?
This is actually as cool as it sounds.

Exoskeletons are wearable robots that extend the abilities of the human body beyond its usual limits. They are built to mirror the body of the operator and amplify its abilities, theoretically allowing construction workers to someday lift steel beams they could hardly budge on their own. More common today are “passive exoskeletons,” which are unpowered and instead simply support the human body as it works. Carmakers including Hyundai and BMW use this variety to reduce the strain that repetitive tasks put on workers’ bodies.

Why it could be important
A technology that both increases productivity and prevents worker injury is one the industrial sector is sure to eat right up. Beyond industry, there are strong physical rehabilitation and commercial possibilities as well. The technology already has shown promise as a breakthrough for disabled individuals.

Challenges
The exoskeleton could end up being a transitional device. As AI and automation continue to improve, there may come a time when humans — even robotically enhanced humans — are not needed for manual labor. If there is no need for humans, there is definitely no need for exoskeletons, making it possible that they become obsolete before ever reaching the mainstream. If this happens, however, there is bound to be a thriving secondary market of middle-aged men looking to annoy their wives with yet another new toy to bring home.

4-D printing

What is it?
In 2017, Adidas made a shoe, called Futurecraft 4-D, with a 3-D-printed sole. Now imagine the whole shoe was 3-D-printed, arrived flat in an envelope and, once taken out and exposed to light, rearranged itself into a sneaker shape. That is 4-D printing: 3-D printing of objects that can change shape after they come out of the printer.

Imagine a 3-D-printed flower that blooms when it detects light or, to use another shoe example, 3-D-printed shoes that become cowboy boots when they hear “Old Town Road.”

MIT assistant professor Skylar Tibbits is credited with pioneering the field and is currently working with software company Autodesk to make 4-D printing more practical.

Why it may be important
There is a wide range of applications for objects that can contract or expand depending on the environment. Imagine a furniture company that could ship a kitchen chair completely flat and have it fully assemble itself the moment it is taken out of the box. Products like this could save consumers loads of time and drastically improve IKEA’s reputation.

Challenges
The technology is still very early in the research and development stage, so there is still a massive question mark over its feasibility. 4-D-printed objects also need a stimulus such as heat or water in order to change shape. Controlling that stimulus — and, in turn, the behavior of the 4-D-printed object — could prove to be difficult.

Neuromorphic hardware

What is it?
As far as artificial intelligence has come, there are still occasions when Siri answers simple commands like, “Call mom,” with, “Call mom what?” This is because most of today’s AI, as intelligent as it is, is only programmed by engineers to make decisions like a human, not to think like a human on its own.

This is where neuromorphic hardware could help. Neuromorphic computing is concerned with mirroring the actual human nervous system to allow machines to perceive and analyze an environment. This will give computers the processing ability to make decisions on their own without a computer scientist needing to tell the computer how to respond to a certain stimulus.
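Neuromorphic chips, such as those Intel is developing, implement networks of “spiking” neurons in silicon: a unit accumulates incoming charge and fires only when a threshold is crossed, much like a biological neuron. The toy leaky integrate-and-fire neuron below is a purely illustrative Python sketch of that behavior, not any chip’s actual programming model.

```python
# A toy leaky integrate-and-fire neuron, the basic unit that neuromorphic
# chips implement in silicon. Purely illustrative; real hardware does this
# in parallel circuits, not in a Python loop.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return the time steps at which the neuron 'spikes'."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current   # integrate input, leak charge
        if potential >= threshold:               # fire when threshold is crossed
            spikes.append(t)
            potential = reset                    # reset after the spike
    return spikes

# A weak steady input never fires; a stronger one fires periodically.
print(simulate_lif([0.05] * 50))   # -> []
print(simulate_lif([0.30] * 50))   # -> spikes every few steps
```

Real neuromorphic hardware runs huge numbers of such units in parallel in dedicated circuits, which is where the hoped-for efficiency gains come from.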

Why it may be important
Mastering this could be crucial if computer scientists ever want the “intelligence” component of AI to be more prevalent than the “artificial” component. This is why companies like Intel are working so hard to advance the field.

Challenges
Replicating the human brain requires fully understanding the human brain, and neurologists are nowhere close to this. In essence, neuromorphic computing advancements may be at the mercy of neuroscience advancements.

Brain-computer interface

What is it?
Everyone has at some point considered whether or not Elon Musk is a robot among humans, and although this might not be true, he is actively trying to turn you into one.

Perhaps “robot” is a bit extreme, but his company Neuralink is connecting humans and computers through brain-computer interfaces (BCIs), devices that allow the brain to control computers and computers to partially control the brain. Whenever our brain “thinks” something, small electric charges race across our neurons at speeds up to 268 mph, according to Stanford’s Virtual Labs project. However, not all of these electric charges make it to their final destination in the body — some of them escape. A BCI — a small device either attached to the scalp or implanted into the brain — can read these escaped signals and interpret what the brain wants to do, allowing humans to remotely control a computer just by thinking of actions.
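The decoding step can be illustrated with a toy example. One common research approach maps the power of the recorded signal in a particular frequency band to a command; the sketch below is hypothetical (the band, threshold and commands are invented) and vastly simpler than any real BCI.

```python
import numpy as np

# Toy illustration of one common BCI idea: decode a user's intent from the
# power of a recorded brain signal in a frequency band. Real systems use
# many channels, filtering and machine learning; this is only a sketch.

def band_power(signal, fs, low, high):
    """Average power of `signal` (sampled at `fs` Hz) between low and high Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low) & (freqs <= high)
    return spectrum[band].mean()

def decode_intent(signal, fs=250, threshold=1e4):
    """Strong 8-12 Hz ("mu" rhythm) power reads as rest; suppression as movement.
    The threshold is arbitrary for this toy example."""
    return "rest" if band_power(signal, fs, 8, 12) > threshold else "move cursor"

# Simulated one-second recordings at 250 Hz.
t = np.arange(0, 1, 1 / 250)
print(decode_intent(100 * np.sin(2 * np.pi * 10 * t)))  # -> "rest"
print(decode_intent(np.random.randn(250)))              # -> likely "move cursor"
```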

Why it may be important
The people likely to benefit most from this technology are the severely disabled. Being able to control a computer using only brain power could allow individuals with motor impairments to live far more independently, or paraplegics with robotic leg braces to walk. And because BCIs can send electric signals as well as read them, researchers believe the technology could eventually allow the deaf to hear and the blind to see by simulating the electric signals these senses create.

Challenges
Worries about a BCI interfering with the brain’s normal processes, or giving someone the ability to control someone else’s thoughts, will continue to concern the public, and mainstream acceptance might be slow as a result. And while it’s okay to test out a 4-D printer before it’s perfect, it’s not as easy to test out a device intended to send electric charges to the brain. But Musk is not alone in his fight to overcome these challenges — Facebook and MIT have both notably shown interest in the field as well.

Biotech — cultured or artificial tissue

What is it?
3-D printing has made leaps and bounds for consumers desperately needing a phone stand or abstract table art. However, in the future, it might actually save lives.

Researchers have made advancements in 3-D bioprinting that may soon allow them to artificially replicate human organs for such procedures as transplants. In addition, advancements in stem cell research have made it possible for scientists to increasingly grow tissue in a laboratory. Both of these innovations will continue to change the way doctors plan treatments for their patients.

Why it may be important
Every year, 8,000 people die waiting for an organ transplant, according to the United Network for Organ Sharing. Efforts to combat this shortage have included recruiting more organ donors and even transplanting pig kidneys into humans. However, using biotech to build or cultivate tissue at scale could finally allow physicians to close this gap once and for all. Restoring damaged tissue to a fully functioning state would become much more realistic as well.

Challenges
All foreign transplants incite a response from our immune system, according to the Mayo Clinic. If it’s already common for the human body to reject another human’s heart, getting it to accept a 3-D-printed one will be a challenge. And as with any medical product, getting safety-tested and certified will be a lengthy process.

Quantum computing

What is it?
Let me give you the CliffsNotes version of quantum computing. There is no CliffsNotes version of quantum computing.

There’s a concept in modern computing called Moore’s Law, which states that advancements in technology will allow the number of transistors on a computer chip to roughly double every two years. This is great news for anyone dying to see their Fortnite avatars in even higher resolution, but it creates a problem — if this trend continues, the power required to run the world’s computers will exceed the world’s total energy production by the year 2040, according to a report issued by the Semiconductor Industry Association.

This is why computer scientists worldwide are racing to solve the riddle that is quantum computing. While traditional computers store data in a slew of 1s and 0s, quantum computing uses the quantum mechanics principle of superposition to in essence store data as a 1, a 0, or a certain overlap of the two.
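That “overlap” can be loosely illustrated with a toy single-qubit simulator: it stores two amplitudes and only collapses to a definite 0 or 1 when measured. This is a classical Python sketch written to mirror the description above, not how a quantum computer is actually built or programmed.

```python
import random

# A toy single-qubit simulator, just to illustrate superposition.

class Qubit:
    def __init__(self, amp0=1.0, amp1=0.0):
        # Amplitudes for the 0 and 1 states; squared magnitudes sum to 1.
        self.amp0, self.amp1 = amp0, amp1

    def hadamard(self):
        """Put the qubit into an equal superposition of 0 and 1."""
        a0, a1 = self.amp0, self.amp1
        self.amp0 = (a0 + a1) / 2 ** 0.5
        self.amp1 = (a0 - a1) / 2 ** 0.5

    def measure(self):
        """Collapse to 0 or 1 with probability given by the amplitudes."""
        result = 0 if random.random() < self.amp0 ** 2 else 1
        self.amp0, self.amp1 = (1.0, 0.0) if result == 0 else (0.0, 1.0)
        return result

# Starting from 0 and applying a Hadamard gate, measurement is a coin flip.
counts = [0, 0]
for _ in range(1000):
    q = Qubit()
    q.hadamard()
    counts[q.measure()] += 1
print(counts)  # roughly [500, 500]
```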

Why it may be important
The flexibility in this new data storage option allows much more information to be stored in a quantum bit — called a “qubit” — than in a traditional bit. This will make computing much more efficient and less energy intensive, hopefully allowing you to both check Facebook and keep the lights on in the year 2040.

Challenges
The hard part of quantum computing is that ... it’s quantum computing. Not even Bill Gates fully understands it. Making qubits is still incredibly difficult, and getting them to interact in a way that allows for successful data storage is proving to be even more difficult. Other problems exist, such as correcting the random errors that crop up in qubits and figuring out what material to build a quantum computing chip from. However, with more and more governments and companies investing in the technology, a quantum future may be possible within the next few decades.

Artificial general intelligence

What is it?
"Artificial intelligence" is a term start-ups throw in their company’s mission statement to act as a magnet for Silicon Valley investment. Artificial general intelligence (AGI) is a branch of AI that focuses on computers having the genuine intellectual capabilities of the human brain. While this was always the original intention of AI, the term’s more general usage to include programmed decisions that only appear autonomous has given rise to this more specific term.

With AGI, computer scientists aren’t coding machines to have responses to every imaginable inquiry, but instead giving machines the resources to make decisions on their own. It’s more of a general concept than a specific technology — less specific, for example, than neuromorphic hardware, which is concerned with mirroring the human brain — since accomplishing it will involve the incorporation of countless technological concepts. However, a report by NEORIS predicts computers will reach human-level intelligence by the year 2040, surely part of the reason Jeff Bezos made it a focus of Amazon’s recent MARS conference.

Why it may be important
It’s hard to imagine an industry that robots with human intelligence would not influence. Viable autonomous cars, better factory automation, stronger cybersecurity bots and spot-on Netflix recommendations would all be made possible by AGI. And although it would be easy to assume this technology will render many American jobs obsolete, a recent report found AI and robotics will actually create close to 60 million more jobs than they eliminate by 2022.

Challenges
As Apple’s Tim Cook has stated, technology of this sort could be incredibly dangerous (Elon Musk and the late Stephen Hawking voiced similar concerns). Not only that, but giving robots human-like intelligence would also require writing a moral code for how they should use it. This is sparking many debates, such as how a driverless car should choose whom to kill in a potentially fatal car accident or how robots will respect the privacy of consumers. A thoughtful, meticulously planned implementation of AGI into modern computers will probably be necessary to maximize consumer trust.

This story first appeared on CNBC.com.