In an unassuming building near downtown Chicago, Marc Gyongyosi and his team at IFM/Onetrack are going from strength to strength.
Artificial intelligence has a basic principle: think simply. The words were written in simple handwriting on a piece of paper taped to the upstairs back wall of their two-story industrial building. However, what they're doing here with artificial intelligence is anything but simple.
The future of artificial intelligence

Artificial intelligence is shaping the future of humanity across almost every industry. It is already the main driver of emerging technologies such as big data, robotics and the Internet of Things, and it will continue to act as a technological innovator for the foreseeable future.
Using machine learning and computer vision to detect and classify various "safety events," the company's shoebox-sized device hasn't seen everything, but it has seen a lot: how the driver looks while operating the vehicle, how fast they are driving, where they are driving, the locations of the people around them and how other forklift operators are maneuvering their vehicles. IFM's software automatically detects safety violations (such as cell phone use) and notifies warehouse managers so they can take immediate action. The main goals are to prevent accidents and improve efficiency. Gyongyosi claims that the mere knowledge that one of IFM's devices is watching has had a "huge impact."
Gyongyosi said: "If you think about a camera, it is really the richest sensor available to us today, at a very interesting price point. Smartphones, cameras and image sensors have become very cheap, yet we capture a lot of information. From one image, we might be able to infer 25 signals today; but six months from now, we might be able to infer 100 or 150 signals from that same image. The only difference is the software that's looking at the image... Every customer we bring on board benefits from every other customer, because our system starts to see and learn more of the process and detect more of the important and relevant things."
IFM is just one of countless AI innovators in this burgeoning field. For example, of the 9,130 patents granted to IBM inventors in 2021, 2,300 were related to artificial intelligence. Tesla founder and tech titan Elon Musk donated $10 million to the nonprofit research company OpenAI to fund its ongoing research.
After an evolutionary period that began with "knowledge engineering" and was marked by decades of sporadic hibernation, the technology has progressed to model- and algorithm-based machine learning, with an increasing focus on perception, reasoning and generalization. Now, artificial intelligence has taken center stage like never before, and it's not going to cede the spotlight any time soon.
Why is artificial intelligence important?

Artificial intelligence is important because it is the basis of computer learning. With artificial intelligence, computers can tap into massive amounts of data and use their learned "intelligence" to make optimal decisions and discoveries in a fraction of the time it would take a human.
Modern artificial intelligence (more specifically, "narrow AI," which performs objective functions using models trained on data and often falls into the categories of deep learning or machine learning) has already touched nearly every major industry. That has been especially true in the past few years, as data collection and analysis have ramped up dramatically thanks to robust Internet of Things connectivity, the proliferation of connected devices and ever-faster computer processing.
Some industries are at the start of their AI journey; others are veteran travelers. Both have a long way to go. Either way, the impact of artificial intelligence on life today is hard to ignore.
But these advances — and many others — are just the beginning. There will be more to come.
David Vandegrift, chief technology officer and co-founder of customer relationship management company 4Degrees, said: "I think any assumption that the capabilities of intelligent software will reach its limit at some point is wrong."
With companies spending billions of dollars on AI products and services annually, tech giants like Google, Apple, Microsoft and Amazon spending billions to create those products and services, universities making AI a more prominent part of their curricula, and the U.S. Department of Defense upping its AI game, big things are bound to happen. Some of these developments are well on their way to being fully realized; some are merely theoretical and may remain so. All are disruptive, for better or worse, and there's no downturn in sight.
Andrew Ng, former head of Google Brain and former chief scientist at Baidu, told ZDNet: "Lots of industries go through this pattern of winter, winter, and then an eternal spring. We may be in the eternal spring of AI."
In a speech at Northwestern University, AI expert Kai-Fu Lee championed AI technology and its forthcoming impact while also noting its side effects and limitations. Of the former, he warned:
"The bottom 90 percent, especially the bottom 50 percent of the world in terms of income or education, will be badly hurt by job displacement... The simple question to ask is, 'How routine is a job?' That is how likely it is that a job will be replaced by AI, because within routine tasks, AI can learn to optimize itself. And the more quantitative, the more objective the job is, the sooner it goes: sorting things into bins, washing dishes, picking fruit and answering customer service calls. These are scripted tasks that are repetitive and routine in nature. In five, 10 or 15 years, they will be displaced by AI."
In the warehouses of online giant Amazon, which hum with more than 100,000 robots, picking and packing functions are still performed by humans, but that will change.
Lee's sentiments were echoed recently by Infosys president Mohit Joshi, who told The New York Times: "People are looking to achieve very big numbers. Earlier they had incremental goals of reducing their workforce by 5 to 10 percent. Now they're asking, 'Why can't we do it with just 1 percent of the people we have?'"
On a more optimistic note, Lee stressed that today's AI is useless in two significant ways: it has no creativity and no capacity for empathy or love. Rather, it is "a tool to amplify human creativity." His solution? Those with repetitive or routine jobs must learn new skills so as not to be left by the wayside. Amazon even offers its employees money to train for jobs at other companies.
Klara Nahrstedt, a computer science professor and director at the University of Illinois at Urbana-Champaign, said: "One of the absolute prerequisites for AI to be successful in many fields is that we invest tremendously in education to retrain people for new jobs."
Nahrstedt worries this won't happen widely or often enough. IFM's Gyongyosi is even more specific.
"People need to learn about programming like they learn a new language," Gyongyosi said, "and they need to do that as early as possible, because this really is the future. In the future, if you don't know coding and you don't know programming, it's only going to get more difficult."
"While many of the people pushed out of jobs by technology will find new ones, it won't happen overnight. As with America's transition from an agrarian to an industrial economy during the Industrial Revolution, which played a big role in causing the Great Depression, people eventually got back on their feet. The short-term impact, however, was massive," Vandegrift said. "There's a gap between jobs going away and new ones appearing, and the transition is not necessarily as painless as people think."

Mike Mendelson, a learner experience designer at NVIDIA, is a different kind of educator than Nahrstedt. He works with developers who want to learn more about AI and apply it to their businesses.
"If they understand what the technology is capable of and they understand the domain very well, they start to make connections and say, 'Maybe this is an AI problem, maybe that's an AI problem,'" he said. "That's more often the case than 'I have a specific problem I want to solve.'"

In Mendelson's view, some of the most intriguing AI research with near-future ramifications is happening in two areas: "reinforcement" learning, which deals in rewards and penalties rather than labeled data; and generative adversarial networks (GANs for short), which allow computer algorithms to create rather than merely assess by pitting two networks against each other. The former is exemplified by the Go-playing prowess of Google DeepMind's AlphaGo Zero, the latter by original image or audio generation based on learning about a certain subject, like celebrities or a particular type of music.
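The rewards-and-penalties idea behind reinforcement learning can be sketched in a few lines of tabular Q-learning. This is a hypothetical toy (an agent learning to walk down a five-cell corridor), nothing resembling AlphaGo Zero; the environment in `step` and all the constants are invented for illustration:

```python
import random

# Toy tabular Q-learning sketch: an agent on a 5-cell corridor earns +1 for
# reaching the rightmost cell and a small penalty for every other step.
# It learns purely from these rewards and penalties, not from labeled data.
N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True      # reward for reaching the goal
    return nxt, -0.01, False       # small penalty for every other step

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: occasionally explore, otherwise take the best-known action
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy should move right from every non-goal cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

The epsilon-greedy choice is what balances exploring new moves against exploiting what the agent has already learned; systems like AlphaGo Zero apply the same reward-driven principle at vastly larger scale with neural networks in place of the lookup table.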
On a larger scale, AI is expected to have a significant impact on sustainability, climate change and environmental issues. Ideally, through the use of sophisticated sensors, cities will become less congested, less polluted and generally more livable.
Artificial Intelligence and Privacy Risks
AI's reliance on big data is, of course, already impacting privacy in a major way. Look no further than Cambridge Analytica's Facebook shenanigans or Amazon's Alexa eavesdropping: two of many examples of technology gone wild. Critics argue that without proper regulations and self-imposed limitations, the situation will get even worse. In 2015, Apple CEO Tim Cook derided rivals Google and Facebook for greed-driven data mining. "They're gobbling up everything they can learn about you and trying to monetize it," Cook said in a 2015 speech. "We think that's wrong."
"To advance artificial intelligence by collecting huge amounts of personal data is laziness, not efficiency," Cook said. "For AI to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound."
Many people agree. In a 2018 paper, the UK-based human rights and privacy groups Article 19 and Privacy International wrote that anxiety about AI should center on its everyday functions rather than on cataclysmic scenarios like robot overlords.
"If implemented responsibly, AI can benefit society," the authors wrote. "However, as is the case with most emerging technologies, there is a real risk that commercial and state use will have a detrimental impact on human rights."
The authors acknowledge that the vast amounts of data being collected can be used in benign attempts to predict future behavior, such as spam filters and recommendation engines. But there is also a real threat that it will negatively affect personal privacy and the right to freedom from discrimination.
Speaking at Westminster Abbey in late 2018, internationally renowned AI expert Stuart Russell joked (or not) that he has formal agreements with journalists that he won't talk to them unless they agree not to put a Terminator robot in the article. His quip revealed an obvious disdain for Hollywood depictions of far-future AI, which tend toward the overwrought and apocalyptic. What Russell referred to as "human-level AI," also known as artificial general intelligence, has long been fodder for fantasy. But the chances of its being realized anytime soon, or at all, are pretty slim.
Russell explained: "There are many major breakthroughs that need to be achieved before we can reach human-like artificial intelligence."
Russell also noted that AI does not yet truly understand language. This illustrates a clear difference between humans and AI today: humans can translate machine language and understand it, but AI cannot do the same with human language. If AI could understand our language, however, AI systems would be able to read and comprehend everything ever written.
"Once we have that capability, you could then query all of human knowledge, and it would be able to synthesize, integrate and answer questions that no human being has ever been able to answer," Russell added, "because no human has read everything, or been able to put together and connect the dots between things that have remained separate throughout history."
This gives us a lot to think about. For that matter, simulating the human brain is extremely difficult, which is another reason why the future of AGI remains hypothetical. John Laird, a longtime professor of engineering and computer science at the University of Michigan, has conducted research in the field for decades.
"Our goal has always been to try to build what we call the cognitive architecture, what we think is innate to an intelligent system," Laird said of work that's largely inspired by human psychology. "One of the things we know, for example, is that the human brain is not really just a homogeneous set of neurons. It's a real structure made up of different components, some of which are associated with knowledge about how to do things in the world."
That's the so-called procedural memory. There's also knowledge based on general facts, known as semantic memory, and knowledge about previous experiences (or personal facts), called episodic memory. One project in Laird's lab involves using natural-language instructions to teach a robot simple games like chess and puzzles. The instructions typically include a description of the goal, an outline of the legal moves and descriptions of failure situations. The robot internalizes these instructions and uses them to plan its actions. As always, though, breakthroughs take time, arriving more slowly than Laird and his colleagues would like.
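Instructions of that shape (a goal, the legal moves, the failure situations) can be represented and planned over with a small search. This is a hypothetical sketch only; `GameSpec`, `plan` and the toy counting game are invented for illustration and have nothing to do with Laird's actual system:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical sketch: a game instruction reduced to the three parts
# described above: a goal, the legal moves, and failure situations.
@dataclass
class GameSpec:
    name: str
    goal: Callable[[int], bool]              # is this state a winning state?
    legal_moves: Callable[[int], List[int]]  # which states are reachable next?
    failure: Callable[[int], bool]           # is this state a dead end?

def plan(spec: GameSpec, start: int, max_depth: int = 10) -> Optional[List[int]]:
    """Depth-first search for a state sequence that reaches the goal."""
    def search(state, path, depth):
        if spec.goal(state):
            return path
        if spec.failure(state) or depth == 0:
            return None
        for nxt in spec.legal_moves(state):
            result = search(nxt, path + [nxt], depth - 1)
            if result is not None:
                return result
        return None
    return search(start, [], max_depth)

# Toy "game": count from 0 up to 5 in steps of +1 or +2, but 4 is forbidden.
counting = GameSpec(
    name="count-to-five",
    goal=lambda s: s == 5,
    legal_moves=lambda s: [s + 1, s + 2],
    failure=lambda s: s == 4 or s > 5,
)

print(plan(counting, 0))  # → [1, 2, 3, 5]
```

The point of the sketch is the separation of concerns: the planner is generic, and everything specific to a game lives in the three callables, which is roughly what internalizing a natural-language instruction would have to produce.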
"Every time we make progress," Laird said, "we also gain a new understanding of how difficult it is."
Plenty of leading figures in AI subscribe to (and some even play up) a nightmare scenario involving the so-called "singularity," whereby superintelligent machines take over and permanently alter human existence through enslavement or eradication.
The late theoretical physicist Stephen Hawking famously postulated that if AI itself begins designing better AI than human programmers can, the result could be "machines whose intelligence exceeds ours by more than ours exceeds that of snails." Elon Musk believes, and has long warned, that AGI is humanity's biggest existential threat, saying that efforts to achieve it are like "summoning the demon." He has even expressed concern that his friend and Google co-founder Larry Page, despite his good intentions, could accidentally shepherd something "evil" into existence: for example, "a fleet of artificial-intelligence-enhanced robots capable of destroying mankind." Even IFM's Gyongyosi, no alarmist when it comes to AI predictions, rules nothing out. He says that at some point, humans will no longer need to train systems; the systems will learn and evolve on their own.
"I don't think the methods we're currently using in these areas will lead to machines that decide to kill us," Gyongyosi said. "I think maybe five or 10 years from now, I'll have to reevaluate that statement, because we'll have different methods and different ways of going about these things."
While killing machines will likely remain the stuff of fiction, many believe they will replace humans in various ways.
The Future of Humanity Institute at the University of Oxford published the results of an AI survey titled "When Will AI Exceed Human Performance? Evidence from AI Experts," which contains estimates from 352 machine learning researchers about AI's evolution in the years to come.
There were lots of optimists in this group. By 2026, a median of respondents said, machines will be capable of writing school essays; by 2027, self-driving trucks will render drivers unnecessary; by 2031, AI will outperform humans in the retail sector; by 2049, AI could be the next Stephen King; and by 2053, the next Charlie Teo. The slightly jarring capper: by 2137, all human jobs will be automated. But what of humans themselves? Sipping umbrella drinks served by robots, no doubt.
Diego Klabjan, a professor at Northwestern University and founding director of the Master of Science in Analytics program, considers himself an AGI skeptic.
He explained: "Currently, computers can handle a little more than 10,000 words. So, a few million neurons. But human brains have billions of neurons that are connected in a very intriguing and complex way, whereas the current state of the art is just straightforward connections following very simple patterns. So going from a few million neurons to billions of neurons with current hardware and software technologies, I don't see that happening."
Klabjan also puts little stock in extreme scenarios, such as murderous robots turning the Earth into a smoldering hellscape. He's far more concerned about machines (war robots, for instance) being fed faulty "incentives" by malevolent humans. As MIT physics professor and leading AI researcher Max Tegmark put it in a 2018 TED Talk: "The real threat from AI isn't malice, like in silly Hollywood movies, but competence: AI accomplishing goals that just aren't aligned with ours." That's Laird's take, too.
"I definitely don't see a scenario where something wakes up and decides it wants to take over the world," Laird said. "I think that's science fiction, not the way it's going to play out."
Laird's biggest concern isn't evil AI, per se, but "evil humans using AI as a sort of false force multiplier" for crimes like bank robbery and credit card fraud, among many others. So while he's often frustrated by the pace of progress, AI's slow burn may actually be a blessing.
Laird said: "Understanding what we are creating and how we are integrating it into society may be exactly what we need."
But no one knows the exact answer.
In his Westminster speech, Russell said: "There are several major breakthroughs that have to happen, and those could come very quickly." Citing the rapidly transformative effect of nuclear fission (the splitting of the atom), achieved by British physicist Ernest Rutherford in 1917, he added: "It's very, very hard to predict when these conceptual breakthroughs are going to happen."
But whenever they do come, he emphasized, being prepared matters. That means starting or continuing discussions about the ethical use of AGI and whether it should be regulated. It means working to eliminate data bias, which has a corrupting effect on algorithms and is currently a major blemish on AI. It means working to invent and augment security measures capable of keeping the technology in check. And it means having the humility to realize that just because we can, doesn't mean we should.
"Most AGI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history. It could enable brutal global dictatorship with unprecedented inequality, surveillance, suffering and maybe even human extinction," Tegmark said in his TED Talk. "But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, and everybody is healthy and free to live out their dreams."