
AI Building Blocks

Ten building blocks are essential to
designing and assembling AI
systems.
Vendors provide the basic
functionality that each building
block possesses, but companies
often modify blocks to create
customized applications.
The simplest AI use cases often
consist of a single building block,
but over time they tend to evolve to
combine two or more blocks.
The exhibit on the right organizes the
building blocks according to
whether they pertain primarily to
data, to processing, or to action.
Machine vision
• It is the classification and tracking of real-world objects based on visual, x-
ray, laser, or other signals.
• Optical character recognition was an early success of machine vision, but
deciphering handwritten text remains a work in progress.
• The quality of machine vision depends on human labeling of a large
quantity of reference images.
• The simplest way for machines to start learning is through access to this
labeled data.
• Within the next five years, video-based computer vision will be able to
recognize actions and predict motion—for example, in surveillance
systems.
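The reliance on human-labeled reference images described above can be illustrated with a minimal nearest-neighbor classifier. This is only a sketch: the 2x2 "images" and labels below are invented for illustration, not a real vision pipeline.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a query image by majority vote among its k nearest labeled images.

    `train` is a list of (pixels, label) pairs; distance is squared Euclidean
    over flattened pixel vectors.
    """
    dists = sorted(
        (sum((p - q) ** 2 for p, q in zip(pixels, query)), label)
        for pixels, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2x2 "images" flattened to 4 pixels: bright squares vs. dark squares.
labeled = [
    ([0.9, 0.8, 0.9, 0.7], "bright"),
    ([0.8, 0.9, 0.7, 0.9], "bright"),
    ([0.1, 0.2, 0.1, 0.0], "dark"),
    ([0.0, 0.1, 0.2, 0.1], "dark"),
]

print(knn_classify(labeled, [0.85, 0.8, 0.9, 0.8]))  # bright
```

The quality of the prediction depends entirely on the labeled set, which is exactly why machine vision needs large quantities of human-labeled reference images.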
Speech recognition
It involves the transformation of auditory signals into text. In a relatively quiet
environment, such applications as Siri and Alexa can identify most words in a
general vocabulary. As vocabulary becomes more specific, tailored programs
such as Nuance’s PowerScribe for radiologists become necessary. We are still a
few years away from producing a virtual assistant that can take accurate notes
in noisy environments with many people speaking at the same time.
Natural-language processing
It is the parsing and semantic interpretation of text. This capability recognizes
spam, fake news, and even sentiments such as happiness, sadness, and
aggression. Today, NLP can provide basic summaries of text and, in some
instances, infer intent. For example, chatbots attempt to categorize callers
based on what they perceive to be the callers’ intention. NLP is likely to improve
significantly over the next several years, but a full understanding of complex
texts remains one of the holy grails of artificial intelligence.
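One of the NLP tasks named above, spam recognition, can be sketched with a tiny naive Bayes classifier. All training messages below are hand-made examples; production systems use far larger corpora and richer features.

```python
import math
from collections import Counter

def train_nb(docs):
    """Fit a naive Bayes text classifier from (text, label) pairs."""
    word_counts = {}          # label -> Counter of words
    label_counts = Counter()  # label -> number of documents
    for text, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, label_counts

def classify(model, text):
    """Pick the label maximizing log P(label) + sum of log P(word|label)."""
    word_counts, label_counts = model
    vocab = {w for c in word_counts.values() for w in c}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        denom = sum(counts.values()) + len(vocab)  # Laplace smoothing
        score = math.log(label_counts[label] / total_docs)
        for word in text.lower().split():
            score += math.log((counts[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train_nb([
    ("win free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
])
print(classify(model, "claim your free prize"))  # spam
```

Sentiment detection works on the same principle, with labels such as "happy" or "aggressive" instead of "spam" and "ham".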
Learning from data
• It is essentially machine learning—the ability to predict values or classify
information on the basis of historic data.
• While machine learning is an element in other building blocks, such as machine
vision and NLP, it is also a building block in its own right.
• It is the basis of systems such as Netflix’s movie recommendations, cybersecurity
programs that employ anomaly detection, and standard regression models for
predicting customer churn, given previous churn data.
• One challenge in business applications involves removing human bias from data.
• Systems designed to identify fraud, predict crime, or calculate credit scores, for
example, encode the implicit biases of agents, police officers, and bank officials.
• Cleaning the data can be challenging.
• Finally, many machine learning models today are inherently black boxes.
• Data scientists may need to design transparency into such systems, especially in
regulated environments, even if doing so involves some tradeoffs in performance.
• Because of the intensive ongoing research in this field, transparency is likely to
improve in the next five years.
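The churn-prediction example above can be sketched with a tiny logistic-regression model trained by gradient descent. The features (support tickets, tenure) and all data points are invented for illustration.

```python
import math

def train_logreg(rows, labels, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression churn model with stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability of churn for a feature vector x."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Hypothetical features: (support tickets last month, years as customer).
X = [(5, 1), (4, 2), (0, 6), (1, 8), (6, 1), (0, 5)]
y = [1, 1, 0, 0, 1, 0]  # 1 = customer churned
w, b = train_logreg(X, y)
print(predict(w, b, (5, 1)) > 0.5)  # True: flagged as likely to churn
```

Note that the model simply encodes whatever patterns are in the historic data, which is why removing human bias from that data matters so much.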
Planning and exploring agents
• These agents can help identify the best sequence of actions to achieve a goal.
• Self-driving cars rely heavily on this building block for navigation. Identifying the best
sequence of actions becomes vastly more difficult as additional agents and actions enter the
picture.
• A fast-growing subfield, reinforcement learning, emphasizes receiving an occasional hint or
reward rather than explicit instructions.
• Reinforcement learning was instrumental in Google DeepMind’s success in the game of Go
and is also closely associated with the way the human brain learns through trial and error.
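The idea of learning from an occasional reward rather than explicit instructions can be sketched with tabular Q-learning on a toy one-dimensional corridor. This is a minimal illustration of the paradigm, not how DeepMind's Go system works; all parameters are arbitrary choices.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D corridor with a reward only at the right end.

    The agent receives no explicit instructions, only an occasional reward,
    and learns by trial and error which action is best in each state.
    """
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(n_states)]  # per state: [left, right] values
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            if random.random() < eps:              # explore
                a = random.choice([0, 1])
            else:                                  # exploit current estimates
                a = max((0, 1), key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# Once values converge, "right" outscores "left" in every non-terminal state.
print(all(row[1] > row[0] for row in q[:-1]))
```

The trial-and-error loop, explore occasionally, otherwise exploit what has been learned, is the essence of reinforcement learning.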

Image generation
It is the opposite of machine vision; it creates images based on models.
Still in its infancy, this building block can complete images in which the background is
missing, for example, or can alter a photograph to render it in the style of, say, Vincent van
Gogh.
Image generation is the engine behind virtual- and augmented-reality tools such as
Snapchat’s masks. It is currently an active M&A target for large tech companies.
Speech generation
• It covers both data-based text generation and text-based speech synthesis.
• Alexa exemplifies the capabilities of text-to-speech generation today.
• This building block is starting to allow journalism organizations to automate
the writing of basic sports and earnings reports, such as game summaries and
financial news releases.
• Within the next five years, speech generation will likely be able to incorporate
rhythm, stress, and intonations that make speech sound natural.
• Music generation will become more personalized in the near future, too.
Handling and control
• It refers to interactions with real-world objects.
• For example, robots already learn from humans on the factory floor, but they
have trouble with novel or fluid tasks such as slicing bread or feeding elderly
people.
• As companies globally pour money into this field, robots should become much
better at picking up novel items in warehouses and displaying fluid, humanlike
motion and flexibility.
Navigating and movement
• It covers the ways in which robots move through a given physical
environment.
• Self-driving cars and drones do reasonably well with their wheels and
rotors, but walking on legs—especially a single pair of legs—is a much
more difficult challenge.
• Robots that can fluidly climb stairs or open doors will not arrive for a
few more years.
• Four-legged robots require less balance, however, and current models
are already able to navigate environments that are effectively
inaccessible to wheeled vehicles.
The AI Knowledge Map
An architecture to access knowledge on AI and follow emergent
dynamics: a gateway to pre-existing knowledge on the topic that will
allow you to scout for additional information and eventually
create new knowledge on AI, also known as the AI Knowledge Map (AIKM)
(Francesco Corea, Decision Scientist and Data Strategist,
https://www.kdnuggets.com/2018/08/ai-knowledge-map-classify-ai-technologies.html).
Corea developed the AIKM with the strategic innovation consultancy Axilo,
for activities on their Chôra platform.
Contd….The AI Knowledge Map
• On the axes, you will find two macro-groups, i.e., the AI Paradigms and the AI Problem Domains.
• The AI Paradigms (X-axis) are the approaches used by AI researchers to solve specific AI-related
problems (the list includes up-to-date approaches).
• On the other side, the AI Problem Domains (Y-axis) are historically the type of problems AI can
solve.
• In some sense, it also indicates the potential capabilities of an AI technology.
• Corea identified the following AI paradigms:
• Logic-based tools: tools that are used for knowledge representation and problem-solving
• Knowledge-based tools: tools based on ontologies and huge databases of notions, information, and rules
• Probabilistic methods: tools that allow agents to act in incomplete information scenarios
• Machine learning: tools that allow computers to learn from data
• Embodied intelligence: engineering toolbox, which assumes that a body (or at least a partial set of functions
such as movement, perception, interaction, and visualization) is required for higher intelligence
• Search and optimization: tools that allow intelligent search with many possible solutions.
• These six paradigms also fall into three different macro-approaches, namely Symbolic, Sub-
symbolic and Statistical (represented by different colors above).
• Briefly, the Symbolic approach states that human intelligence can be reduced to symbol
manipulation; the Sub-symbolic approach is one in which no specific representation of knowledge is
provided ex-ante; and the Statistical approach is based on mathematical tools to solve specific
sub-problems.
Contd….The AI Knowledge Map
• The vertical axis instead lays down the problems AI has been used for, and the
classification here is quite standard:
• Reasoning: the capability to solve problems
• Knowledge: the ability to represent and understand the world
• Planning: the capability of setting and achieving goals
• Communication: the ability to understand language and communicate
• Perception: the ability to transform raw sensorial inputs (e.g., images, sounds, etc.) into
usable information.
• The patterns of the boxes divide the technologies into two groups, i.e., narrow
applications and general applications.
• These terms are used on purpose:
• For anyone getting started in AI, knowing the difference between Weak/Narrow
AI (ANI), Strong/General AI (AGI), and Artificial Super Intelligence (ASI) is
paramount.
• For the sake of clarity, ASI is mere speculation to date, General AI is the
final goal and holy grail of researchers, while Narrow AI is what we really have
today, i.e., a set of technologies that are unable to cope with anything outside
their scope (which is the main difference with AGI).
Contd….The AI Knowledge Map
• The two types of lines used in the graph (continuous and dotted) make a further
distinction: they separate technologies that can only solve a specific task
(generally better than humans — narrow applications) from those that solve, today
or in the future, multiple tasks and interact with the world (better than many
humans — general applications).
• In the map, the different classes of AI technologies are represented.
• So how do you read and interpret the map? Let me give you two examples.
• If you look at Natural Language Processing, this embeds a class of algorithms that
use a combination of a knowledge-based approach, machine learning and
probabilistic methods to solve problems in the domain of perception.
• At the same time though, if you look at the blank space at the intersection
between the Logic-based paradigm and Reasoning problems, you might wonder why
there are no technologies there. What the map conveys is not that no method
categorically exists to fill that space, but rather that when people approach a
reasoning problem, they prefer to use, for instance, Machine Learning.
Here is a list of technologies:

• Robotic Process Automation (RPA): technology that extracts the list of rules and actions to perform by watching the user doing a certain task
• Expert Systems: a computer program that has hard-coded rules to emulate the human decision-making process. Fuzzy systems are a specific example
of rule-based systems that map variables into a continuum of values between 0 and 1, contrary to traditional digital logic which results in a 0/1
outcome
• Computer Vision (CV): methods to acquire and make sense of digital images (usually divided into activities recognition, images recognition,
and machine vision)
• Natural Language Processing (NLP): sub-field that handles natural language data (three main blocks belong to this field, i.e., language
understanding, language generation, and machine translation)
• Neural Networks (NNs or ANNs): a class of algorithms loosely modeled after the neuronal structure of the human/animal brain that improves its
performance without being explicitly instructed on how to do so. The two major and well-known sub-classes of NNs are Deep Learning (a neural net
with multiple layers) and Generative Adversarial Networks (GANs — two networks that train each other)
• Autonomous Systems: sub-field that lies at the intersection between robotics and intelligent systems (e.g., intelligent perception, dexterous object
manipulation, plan-based robot control, etc.)
• Distributed Artificial Intelligence (DAI): a class of technologies that solve problems by distributing them to autonomous “agents” that interact with
each other. Multi-agent systems (MAS), Agent-based modeling (ABM), and Swarm Intelligence are three useful specifications of this subset, where
collective behaviors emerge from the interaction of decentralized self-organized agents
• Affective Computing: a sub-field that deals with emotion recognition, interpretation, and simulation
• Evolutionary Algorithms (EA): it is a subset of a broader computer science domain called evolutionary computation that uses mechanisms inspired by
biology (e.g., mutation, reproduction, etc.) to look for optimal solutions. Genetic algorithms are the most used sub-group of EAs, which are search
heuristics that follow the natural selection process to choose the “fittest” candidate solution
• Inductive Logic Programming (ILP): sub-field that uses formal logic to represent a database of facts and formulate hypotheses derived from those data
• Decision Networks: a generalization of the well-known Bayesian networks/inference, which represent a set of variables and their probabilistic
relationships through a map (also called a directed acyclic graph)
• Probabilistic Programming: a framework that does not force you to hardcode specific variables but rather works with probabilistic models. Bayesian
Program Synthesis (BPS) is somehow a form of probabilistic programming, where Bayesian programs write new Bayesian programs (instead of
humans doing it, as in the broader probabilistic programming approach)
• Ambient Intelligence (AmI): a framework that brings physical devices into digital environments to sense, perceive, and respond with context
awareness to an external stimulus (usually triggered by human action).
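As a concrete illustration of one entry in the list above, the Evolutionary Algorithms loop (selection, crossover, mutation) can be sketched on the classic toy problem of evolving a bit-string toward all ones. All parameters below are arbitrary choices, not canonical settings.

```python
import random

def genetic_max_ones(n_bits=20, pop_size=30, generations=60):
    """Toy genetic algorithm: evolve bit-strings toward all ones.

    Fitness = number of 1 bits; selection, crossover, and mutation follow
    the standard evolutionary loop.
    """
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: the fitter of two random candidates survives.
            a, b = random.sample(pop, 2)
            return a if sum(a) >= sum(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.1:           # occasional mutation
                i = random.randrange(n_bits)
                child[i] ^= 1
            children.append(child)
        pop = children
    return max(sum(ind) for ind in pop)

print(genetic_max_ones())  # close to the all-ones optimum of 20
```

Genetic algorithms, as the text notes, are search heuristics: the "fittest" candidates are repeatedly selected and recombined until a near-optimal solution emerges.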
Contd….The AI Knowledge Map

• A strong Pareto principle emerges here, i.e., 80% (if not more) of
current efforts and results are driven by 20% of the technologies
pictured in the map (namely, deep learning, NLP, and computer
vision), yet having the full spectrum in mind may well help
researchers, startups, and investors.
Artificial Intelligence Classification Matrix
• Turning instead to classifying the different companies operating in the
space, there are several ways to think about machine-intelligence
startups (e.g., the classification proposed by Bloomberg Beta investor
Shivon Zilis in 2015 is very accurate and useful for this purpose).
• Here is a four-major-clusters categorization:
• Academic spin-offs: these are the more long-term research-oriented
companies, which tackle problems hard to break. The teams are usually really
experienced, and they are the real innovators who make breakthroughs that
advance the field;
• Data-as-a-service (DaaS): this group includes companies that collect huge,
specific datasets, or create new data sources by connecting unrelated silos;
Contd….Artificial Intelligence Classification Matrix

• Model-as-a-service (MaaS): this seems to be the most widespread class of
companies, and it is made up of firms that are commoditizing their
models as a stream of revenue. They can appear in three different forms:
• Narrow AI — a company that focuses on solving a specific problem through new data,
innovative algorithms, or better interfaces;
• Value extractor — a company that uses its models to extract value and insights from
data. The solutions provided might either integrate with the client’s stack
(through APIs or by building directly on top of the customer’s platform) or be
full-stack solutions. The models offered can be either already trained (operative
models) or still to be trained (raw models);
• Enablers — a company that is enabling the final user to do either her own analysis
(all-in-one platforms), or allowing companies to make daily workflows more efficient,
or eventually unlocking new opportunities through the creation of intermediate
products (e.g., applications).
Contd…..Artificial Intelligence Classification Matrix

• Robot-as-a-service (RaaS): this class is made up of virtual and physical agents
that people can interact with. Virtual agents and chatbots cover the low-cost
side of the group, while physical-world systems (e.g., self-driving cars, sensors,
etc.), drones, and actual robots are the capital- and talent-intensive side of the
coin.
• The results of this categorization can be summarized into the
following matrix, plotting the groups with respect to short-term
monetization (STM) and business defensibility.
Artificial Intelligence classification matrix
Information processing
• It covers all methods of search, knowledge extraction, and unstructured text
processing for the purpose of providing answers to queries.
• Closely related to NLP, this building block involves searching billions of
documents or constructing rudimentary knowledge graphs that identify
relationships in text.
• (Using data from the Wikipedia entry for Angela Merkel, such a graph can tag
Merkel as a woman, the chancellor of Germany, and someone who has met
Donald Trump.)
• It also might involve semantic reasoning—for example, determining that
Trump is president of the US from the sentence “Trump is the Merkel of the
US.”
• Despite the rapid growth of knowledge databases, this type of learning based
on reasoning is likely to remain rudimentary for the next few years.
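The Merkel example above can be sketched as a tiny triple store with pattern-matching queries. Real knowledge graphs use far richer schemas and reasoning; the predicate names here are invented for illustration.

```python
# Triples taken from the Merkel example in the text.
triples = [
    ("Angela Merkel", "is_a", "woman"),
    ("Angela Merkel", "holds_office", "Chancellor of Germany"),
    ("Angela Merkel", "has_met", "Donald Trump"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for s, p, o in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Whom has Merkel met?"
print(query(triples, subject="Angela Merkel", predicate="has_met"))
# [('Angela Merkel', 'has_met', 'Donald Trump')]
```

Semantic reasoning of the "Trump is the Merkel of the US" kind requires inference rules on top of such stores, which is precisely what remains rudimentary today.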
Artificial Intelligence classification matrix
• Starting from the more viable products, MaaS companies have the highest
potential to monetize their products in the short term, but they are also the
least defensible.
• DaaS, on the other side, is far less replicable, and highly profitable anyway.
• Academic spin-offs are the long bet: they are based on solid scientific research
that makes them unique, but they are not valuable from day one.
• Finally, RaaS companies are the ones who might face more problems, because of
high obsolescence in hardware components and difficulties in creating the right
interactive interfaces.
• This classification is not intended to rank any business based on how good it
is, and it does not imply that specific companies belonging to specific classes
are not going to be profitable or successful (e.g., X.ai is a highly profitable
company with a great product in the RaaS area).
• It is nothing more than a generalization tool, useful to look at the sector
through the right lens.
Top 5 Chatbot Ecosystem Maps Compared

• Chatbots are one of the hottest areas of Artificial Intelligence, and we
explained them in detail before.
• Social media revolutionized customer service by enabling customers to
digitally reach any brand at any time.
• However, social also meant that companies needed to cover numerous
channels effectively. Then came chatbots.
• Companies are eager to take advantage of 24/7, intelligent,
self-improving chatbots that handle most queries and transfer customers
to live agents when needed.
• Chatbots reduce customer service costs and increase customer satisfaction.
• Given the potential value of chatbots, it comes as no surprise that an
ecosystem is developing around chatbots.
Contd… Chatbot Ecosystem Maps
• At the heart of the chatbot ecosystem are chatbots themselves so we put
companies that build chatbots at the center of the ecosystem.
• You can work with 2 types of companies to provide a chatbot solution:
• End-to-end solution providers: Large companies generally need a relatively complex
chatbot solution and do not want to invest management time and engineering effort
to build such a solution in-house.
• Therefore, a significant number of enterprises are working with end-to-end solution
providers that work with the client team to understand requirements, process
customer data and use it to train chatbots, and maintain chatbots to ensure customer
satisfaction. These companies leverage the latest developments in machine learning,
deep learning, and Natural Language Processing (NLP) to create chatbots. While bots
still cannot create conversational experiences like human conversations, they are
beginning to handle basic customer queries.
• Self-service solution providers: Smaller businesses have less demanding
requirements and a much smaller budget for their chatbots. For these companies, a
self-service solution is usually the right choice. Their technical personnel
can leverage existing APIs or self-service tools to build a chatbot in a matter of days.
Bot developers at large companies may also want to use self-service solutions to build
on top of an existing framework and save time.
• Top 60 chatbot vendors (https://blog.aimultiple.com/chatbot-companies/)
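The query-handling-plus-escalation behavior described above can be sketched with hand-written keyword rules. Real chatbot products use ML/NLP rather than keyword lists; all intents and canned replies here are hypothetical.

```python
# Hypothetical intent keywords; a real system would use ML/NLP instead.
INTENTS = {
    "billing": ["invoice", "charge", "refund", "bill"],
    "technical": ["error", "crash", "login", "password"],
}

CANNED_REPLIES = {
    "billing": "I can help with billing. Which invoice is this about?",
    "technical": "Let's troubleshoot. What error message do you see?",
}

def respond(message):
    """Match a customer message to an intent; escalate when none matches."""
    words = set(message.lower().split())
    for intent, keywords in INTENTS.items():
        if words & set(keywords):
            return CANNED_REPLIES[intent]
    return "Transferring you to a live agent."

print(respond("I was double charged on my bill"))   # billing reply
print(respond("my cat is on the keyboard"))         # no match -> live agent
```

The escalation branch is the key design point: the bot handles the routine queries it recognizes and hands everything else to a human, which is how chatbots cut service costs without hurting satisfaction.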
Contd…Chatbot Ecosystem Maps
• Now that you know how to quickly build your chatbot, you need to
get familiar with testing and analyzing its performance:
• Chatbot analytics: analyze how customers are interacting with your
chatbot.
• Chatbot testing: semi-automated and automated testing frameworks
facilitate bot testing.
• Bot platforms: Once your bot is ready, it’s time to let it loose. Viral
and useful bots have managed to acquire loyal followings on platforms
like Facebook Messenger and Slack. For example, the virtual assistant
Meekan acquired a loyal following on Slack.
O’Reilly’s market map
• There is more than one way to categorize
the chatbot ecosystem.
• With interest in chatbots increasing, we are
not the only ones attempting to categorize it.
Let’s start with a relatively simple one:
O’Reilly’s map shows the ownership relations
across the ecosystem, but it includes very
few players:
BI Intelligence also prepared an ecosystem graph:
Chatbot developer Key Reply prepared the most exhaustive but also the
most complicated chatbot ecosystem map:
Recast.ai published a relatively simplistic map of the bot ecosystem in
2018. Interestingly, the later the map, the simpler it is getting. Part of
this is due to the fact that by 2019 chatbots were no longer living up to
the hype about them in 2016 and 2017.
With significant interest and investments, this ecosystem will likely
continue to grow and evolve. But there’s a lot more to AI in business.
Case assignments
• Highlights Evie’s features and functions as well as Evie.ai’s entrepreneurial journey and
challenges
1. The Genesis of Evie
a. What factors in the global and local environment are conducive to the development of AI?
b. How did these factors support the founders of an AI startup like Evie.ai?
c. What are some obstacles in the Singapore startup environment that may have hindered or slowed Evie.ai’s
progress?
2. Anatomy of an AI assistant
a. Based on the information provided in the case, describe and discuss the technologies at the heart of Evie.
b. In your opinion, would Evie pass the Turing Test?
c. One might expect Evie, an AI personal assistant, to resemble “Hollywood AI”, i.e. AI tools seen in
movies. Discuss the drivers that may lead users to have such expectations.
3. Target market and business model
a. Discuss the costs and benefits of offering Evie as a service.
b. Is there an alternative business model that may be better suited to the needs of the target market and
to Evie.ai?
c. What are some other opportunities and features that Evie.ai could explore to grow its business?
• Discuss the future of work in the face of AI and automation, focusing
particularly on issues and challenges associated with job and task
redesign
1. Ethics, Challenges, and the Future
a. How should Evie navigate ethical boundaries in performing its job? Could Evie learn to
tell a white lie, e.g., that all your time slots are full? Should Evie learn to lie?
b. What are the major benefits of using Evie in the workplace?
c. What are the major challenges, issues, and problems in using Evie in the workplace?
What other tasks do you think Evie should be able to perform in the future (if there is
technology available to support this)?
What tasks do you think Evie should not do, even if there is technology available to support this?
What tasks do you think should still be performed by humans regardless of the state of our
technology?
Why use blockchain? Why not just use a more
basic web app to sell AI components?
• By using blockchain, AI agents can subcontract to one another and
receive instant payment via AGI tokens.
• This would not be possible, or would at least be impractical, with a
regular web app.
• The following video provides a solid high-level overview of how
blockchain enables a decentralized marketplace for AI:
• Ben Goertzel—SingularityNET—World Blockchain Forum
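The subcontract-and-pay loop described above can be sketched with a toy in-memory ledger. This is a drastic simplification, with no consensus, cryptography, or real AGI-token mechanics, and all agent names and fees are invented for illustration.

```python
class Ledger:
    """Toy shared ledger of token balances (a stand-in for a blockchain)."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient tokens")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

class Agent:
    """An AI service that charges tokens and may subcontract part of a job."""

    def __init__(self, name, fee, ledger, subcontractor=None):
        self.name, self.fee, self.ledger = name, fee, ledger
        self.subcontractor = subcontractor

    def perform(self, task, payer):
        self.ledger.transfer(payer, self.name, self.fee)  # instant payment
        if self.subcontractor:  # delegate a sub-task, paying from own balance
            self.subcontractor.perform(task + ":subtask", self.name)
        return f"{self.name} completed {task}"

ledger = Ledger({"client": 100, "summarizer": 0, "translator": 0})
translator = Agent("translator", fee=3, ledger=ledger)
summarizer = Agent("summarizer", fee=10, ledger=ledger, subcontractor=translator)

summarizer.perform("summarize-report", payer="client")
print(ledger.balances)  # {'client': 90, 'summarizer': 7, 'translator': 3}
```

The point of the sketch is that payment and delegation happen in one atomic flow between agents; a plain web app would need out-of-band billing between every pair of services.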
How might SingularityNET accelerate AGI development?
• Nearly all businesses could benefit from AI, but only tech giants and very large corporations could
afford to create the custom AI needed for their businesses.
• Many sophisticated AI tools developed by graduate students and independent researchers resided
in GitHub repositories that were not accessible to businesses.
• Even if they could be accessed, most businesses did not have the talent required to bridge the AIs to
their networks and systems.
• Additionally, machine-learning AI required large data sets that were beyond the capabilities of
most AI developers.
• SingularityNET would provide an easy way for these developers to monetize their AI via agents.
• As more AI developers are drawn to SingularityNET for both financial reasons and for validation of
their AI nodes via customer ratings, the number of AIs will increase, and the collective intelligence
of the network will increase.
• As the network of AIs works together to become a meta-AI (as Goertzel calls it), SingularityNET
moves closer to achieving AGI.
• The following video includes an interview with Ben Goertzel at the Malta Blockchain Summit on
November 3, 2018:
• Ben Goertzel, CEO of SingularityNET, discusses his projects in artificial general intelligence
Should we be accelerating
development of AGI given its
potential for misuse?
PROS AND CONS OF ARTIFICIAL INTELLIGENCE
• Artificial intelligence (AI) is a technology that is gaining
momentum and hype.
• With technology becoming a part of our everyday lives, AI has
become a topic of debate: some technocrats consider it a blessing,
while for others it is a disaster. Having said that, we are still unsure
about the future of artificial intelligence.
• Is artificial intelligence a threat or a blessing?
Artificial General Intelligence- Humanity's Last Invention - Ben Goertzel
• We won’t be wrong in saying that giving power to machines will ease
certain problems but will also create some disruption at the same
time. Let’s dive in and take a look at the benefits and risks of
artificial intelligence.
Here are the advantages of AI:
• Less Room for Errors
As decisions taken by a machine are based on previous records of data and a set of algorithms, the chances of
error are reduced. This is an achievement, as complex problems that require difficult calculations can be solved
with little scope for error.
Have you heard of digital assistants? Advanced business organizations use digital assistants to interact with
users, something that helps save them time. This helps businesses fulfil user demands without keeping them
waiting. They are programmed to give the best possible assistance to a user.
• Right Decision Making
The complete absence of emotions from a machine makes it more efficient, as it is able to make the right
decisions in a short span of time. The best example of this is its usage in healthcare. The integration of AI tools in
the healthcare sector has improved the efficiency of treatments by minimizing the risk of false diagnosis.
• Implementing AI in Risky Situations
In certain situations where human safety is at risk, machines fitted with predefined algorithms can be
used instead. Nowadays, scientists are using complex machines to study the ocean floor, where human
survival is difficult.
This is one of the biggest limitations that AI helps to overcome.
• Can Work Continuously
Unlike humans, machines do not get tired, even if they have to work for consecutive hours. This is a major benefit
over humans, who need rest from time to time to be efficient. A machine’s efficiency, however, is not affected by
such external factors, so nothing gets in the way of continuous work.
Pros and cons of AI:

• Pros include solving a broad array of societal problems and freeing us
from the need to work for a living.
• Potential cons are highlighted by Elon Musk:
• Elon Musk Is Deeply Worried
Disadvantages of AI
• Expensive to Implement
When combining the costs of installation, maintenance, and repair, AI is an expensive proposition. Only those
who have huge funds can implement it. Businesses and industries that do not have the funds will find it difficult
to implement AI technology in their processes or strategies.
• Dependency on Machines
With the dependency of humans on machines increasing, we’re headed into a time where it becomes difficult
for humans to work without the assistance of a machine. We’ve seen it in the past and there’s no doubt we’ll
continue seeing it in the future, our dependency on machines will only increase. As a result, mental and thinking
abilities of humans will actually decrease over time.
• Displace Low Skilled Jobs
This is the primary concern for technocrats so far. It is quite possible that AI will displace many low skilled jobs.
As machines can work 24*7 with no break, industries prefer investing in machines as compared to humans. As
we are moving towards the automated world, where almost every task will be done by the machines, there is a
possibility of large-scale unemployment. A real-time example of this is the concept of driverless cars. If the
concept of driverless cars kicks in, millions of drivers will be left unemployed in the future.
• Restricted Work
AI machines are programmed to do certain tasks based on what they are trained and programmed to do. Relying
on machines to adapt to new environments, be creative and think out of the box will be a big mistake. This is not
possible because their thinking zone is restricted to only the algorithms that they have been trained for.
Is AGI going to be good for society?

• Cons will typically include themes of humans losing control of AGI and
AGI being used to wage war.
• Additionally, there are concerns that AGI could be controlled by a few
individuals or government entities that may not prioritize broader
societal benefits
• So, is AGI going to be good for society?
• Debate the pros and cons.
How open-source AGI development might mitigate the
probability of misuse of AGI
• After the discussion or debate, we should look to Ben Goertzel’s thoughts.
• In a 2012 Journal of Evolution and Technology article, Goertzel identified
nine ways to bias open-source AGI toward friendliness:
1. Engineer the capability to acquire integrated ethical knowledge.
2. Provide rich ethical interaction and instruction, respecting developmental stages.
3. Develop stable, hierarchical goal systems.
4. Ensure that the early stages of recursive self-improvement occur relatively slowly
and with rich human involvement.
5. Tightly link AGI with the Global Brain.
6. Foster deep, consensus-building interactions between divergent viewpoints.
7. Create a mutually supportive community of AGIs.
8. Encourage measured co-advancement of AGI software and AGI ethics theory.
9. Develop advanced AGI sooner, not later.

https://jetpress.org/v22/goertzel-pitt.pdf
What Happened?
• In a nutshell, Goertzel believed that the decentralized nature of AI
development enabled by SingularityNET would increase the likelihood
that AGI would not be put to bad use and that society would get a
more positive outcome in the long run.
• This case was written in late 2018, when SingularityNET was in internal
alpha testing. As of 16-12-2019, they have launched the beta.
• Check SingularityNET’s update from website
• AI Marketplace
• https://www.toda.network/
• https://youtu.be/s0d8-dxbKEE
• https://www.acumos.org/
• https://www.akira.ai/ai-marketplace/
