I have to write an essay for my English philosophy class, answering the questions below. In the essay discussion, you must use citations and references from the notes provided, and you may also draw on a few outside sources if needed.
Introduction
The history of AI leading up to the AI Winter may seem like one of failure. But rather than take this as a sure sign that the Strong AI project is doomed, it may instead be more accurate to see it as evidence of the extraordinarily unreasonable optimism that the first AI researchers, and the science journalists writing about them, placed in computers and technology – in the power of raw speed, brute force, and the inexorability of Moore’s Law. Perhaps we shouldn’t be that surprised or disappointed to find that what took Nature roughly 4.5 billion years to accomplish, human beings could not replicate in half a dozen decades. Taking the enormity of the task into consideration, perhaps we should think of the history of AI as a story of slow but steady progress (as would be expected), held back not so much by the early technology of the 20th century as by the philosophy of the 17th – the Cartesian model of the human mind (we will explore what it means to move beyond this when we discuss connectionism).
AI did recover from its first winter, and while there were others, fewer and fewer people were so eager to pronounce it dead and buried. The humility that came from the mismatch between reality and the exaggerated claims of future success drove researchers to come up with more realistic aims and less ambitious designs that offered some level of practical usefulness here and now. Identifying the strengths of AI came from studying its weaknesses. Initially, AI tried to tackle general problems (e.g. the General Problem Solver, GPS), and while its inference program worked very well, the problems it could solve had to be pre-digested (formalized) by humans, and special heuristics had to be manually programmed to limit the search space and try to avoid combinatorial explosion. The problems with commonsense knowledge led to attempts to limit the size of the universe the system had to deal with, such as the microworlds approach. The microworlds approach produced impressive results (e.g. SHRDLU), but it did not seem to scale up. The first real victory of AI was to find a way to apply the impressive capabilities of narrow-field analysis and deep inference of microworlds to produce diagnostic systems simulating the abilities of experts – so-called expert systems.
Examples of expert systems
MYCIN (1976) – A precursor to PUFF – Identification of bacteria in blood and urine samples and prescription of antibiotics; used solely for research purposes
PUFF (1979) – Interpretation of respiratory tests for diagnosis of pulmonary disorders
Cyc (1984 – today) – A massively ambitious project to produce an expert system with common sense knowledge. Researchers have spent decades manually coding millions of pieces of knowledge. The activity this week focuses on this project.
Components of an expert system
The internal structure of a typical expert system consists of three parts, with a natural language engine sometimes included as a fourth:
1. knowledge base (typically if-then rules, can have certainty factor*)
2. database (facts)
3. inference engine (can use the facts in the knowledge base to produce additional facts that weren’t programmed, can have certainty factor*)
4. natural language engine (allowing more natural interaction)
Note: some sources will just mention two components: the knowledge base (in this case, it will include facts and rules) and the inference engine (aka reasoning engine, inference mechanism)
*Certainty factor:
If something is true with 100% confidence, we say it is true with a certainty factor of 1.0
If something is false with 100% confidence, we say it is false with a certainty factor of -1.0
There are then partial certainty factors of 10%, 20%, and so on, represented as 0.1, 0.2, …
Certainty factors can be used for evidence and inferences.
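To make the three components and the certainty factors more concrete, here is a minimal sketch in Python. Everything in it (the rule contents, the fact names, and the convention used to combine certainty factors) is an illustrative assumption rather than a description of any real system:

# Minimal sketch of an expert system's three components (illustrative only).

# 1. Knowledge base: if-then rules, each with a certainty factor (CF).
RULES = [
    {"if": ["fever", "productive_cough"], "then": "bacterial_infection", "cf": 0.7},
    {"if": ["bacterial_infection", "positive_blood_culture"], "then": "bacteremia", "cf": 0.9},
]

# 2. Database: known facts, each with its own certainty factor.
facts = {"fever": 1.0, "productive_cough": 0.8, "positive_blood_culture": 1.0}

# 3. Inference engine: derives new facts that were never explicitly programmed.
def infer(rules, facts):
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if all(cond in facts for cond in rule["if"]) and rule["then"] not in facts:
                # One simple convention: the conclusion's CF is the rule's CF
                # scaled by the weakest piece of evidence.
                evidence_cf = min(facts[cond] for cond in rule["if"])
                facts[rule["then"]] = round(rule["cf"] * evidence_cf, 2)
                changed = True
    return facts

print(infer(RULES, facts))
# {'fever': 1.0, 'productive_cough': 0.8, 'positive_blood_culture': 1.0,
#  'bacterial_infection': 0.56, 'bacteremia': 0.5}

Note how the second rule fires only because the first rule has already derived "bacterial_infection": the system ends up "knowing" something that was never entered into its database.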
Example of an if-then rule (PUFF, 1979 – Pulmonary Function Analysis)
IF:
1. If the severity of obstructive airways disease of the patient is greater than or equal to mild, and
2. the degree of diffusion defect of the patient is greater than or equal to mild, and
3. the TLC observed/predicted of the patient is greater than or equal to 110, and
4. the observed/predicted difference in RV/TLC of the patient is greater than or equal to 10
THEN:
1. There is strongly suggestive evidence (0.9) that the subtype of obstructive airways disease is emphysema, and
2. It is definite (1.0) that “OAD, Diffusion defect, elevated TLC, and elevated RV together indicate emphysema” is one of the findings
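As a rough illustration of how a rule like this might be encoded, here is a sketch in Python. The attribute names and thresholds follow the rule above, but the encoding itself is an assumption and not PUFF's actual representation:

# Illustrative encoding of the PUFF rule above (not PUFF's actual code).
def puff_rule(patient):
    severities = ["none", "mild", "moderate", "severe"]
    if (severities.index(patient["oad_severity"]) >= severities.index("mild")
            and severities.index(patient["diffusion_defect"]) >= severities.index("mild")
            and patient["tlc_observed_over_predicted"] >= 110
            and patient["rv_tlc_difference"] >= 10):
        return [
            ("subtype of OAD is emphysema", 0.9),   # strongly suggestive evidence
            ("OAD, diffusion defect, elevated TLC and elevated RV "
             "together indicate emphysema", 1.0),   # definite finding
        ]
    return []

findings = puff_rule({"oad_severity": "moderate", "diffusion_defect": "mild",
                      "tlc_observed_over_predicted": 115, "rv_tlc_difference": 12})
print(findings)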
Modes of use: knowledge acquisition, consultation and explanation
Knowledge-acquisition
A knowledge engineer acts as a translator between the experts and the expert system, interviewing the experts to obtain:
1. vocabulary
2. concepts
3. facts
4. problems
5. solutions
6. if-then rules
These are then entered into the expert system, so the system can emulate the expert. Software applications known as knowledge-acquisition managers can also assist in this process (we will look at an example).
Consultation
The primary purpose of an expert system is to simulate an expert. In consultation mode, the expert system is provided with information about the problem, which it uses to try to produce a diagnosis or to suggest courses of action to solve it.
Explanation
When confronted with a diagnosis or suggested course of action, the user of the system may want to understand how the system arrived at the answer, so they can query the system about its use of facts and inferences. This allows for correction/improvement of the system by an expert.
Example – The Mycin expert system
MYCIN was a medical diagnosis expert system that was developed in the 70s for research purposes (it was never used in medical practice). Given enough information about an infection, the system could identify the bacteria that caused it, together with a measure of confidence in the diagnosis. Although its diagnoses weren’t always accurate, its performance was superior to that of the average human expert.
In order to obtain sufficient information about the illness, the system interrogates the user for relevant information. MYCIN has to know which questions to ask. At each point, the question is determined by MYCIN’s current hypothesis (and answers to previous questions). MYCIN is a backward-chaining system. This means that in order to determine the cause of the patient’s illness, it looks for rules which have a THEN clause suggesting diseases. Then, it uses the IF clause to set up subgoals, looks for THEN clauses of other rules, and so on.
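Here is a minimal backward-chaining sketch in Python, in the spirit of (but far simpler than) MYCIN. The rules, goal names, and the ask-the-user fallback are illustrative assumptions:

# Minimal backward-chaining sketch (illustrative; not MYCIN's actual code).
RULES = [
    {"if": ["gram_negative", "rod_shaped", "anaerobic"], "then": "bacteroides"},
    {"if": ["blood_culture_positive", "high_fever"], "then": "gram_negative"},
]

known = {"rod_shaped": True, "anaerobic": True}

def prove(goal, trace):
    """Try to establish `goal`, asking the user only when no rule settles it."""
    if goal in known:
        return known[goal]
    for rule in RULES:
        if rule["then"] == goal:
            trace.append(rule)                # remember the rule for explanation mode
            if all(prove(cond, trace) for cond in rule["if"]):
                known[goal] = True
                return True
    # Nothing in the knowledge base settles this goal: ask the user, as MYCIN did.
    answer = input(f"Is '{goal}' true? (y/n) ").strip().lower() == "y"
    known[goal] = answer
    return answer

trace = []
print("Diagnosis: bacteroides?", prove("bacteroides", trace))
# The list of rules collected in `trace` can be replayed to the user in explanation mode.

Starting from the top-level hypothesis ("bacteroides"), the IF clauses become subgoals, and only the questions relevant to the current hypothesis are ever put to the user.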
MYCIN was used together with an application called Teiresias, a knowledge-acquisition manager (Teiresias is a clairvoyant figure in Greek mythology). It helps experts formulate rules for MYCIN and provides explanations for how conclusions are reached.
In case of a mistaken diagnosis, Teiresias can lead an expert through the reasoning MYCIN followed until the point where the origin of the wrong diagnosis is discovered. Existing rules can be amended, and/or new rules can be considered. If the new rules are compatible with existing rules, they are permanently added to the knowledge base, thus improving the performance of the expert system and making its behaviour closer to that of the human expert.
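One way to picture the compatibility check is the following sketch (purely illustrative; the real Teiresias test was far more sophisticated):

# Illustrative compatibility check before adding a new rule (not Teiresias itself).
def conflicts(new_rule, existing_rules):
    """Flag an existing rule with the same IF part but a contradictory THEN part."""
    for rule in existing_rules:
        if set(rule["if"]) == set(new_rule["if"]) and rule["then"] != new_rule["then"]:
            return rule
    return None

knowledge_base = [{"if": ["gram_negative", "anaerobic"], "then": "bacteroides"}]
candidate = {"if": ["anaerobic", "gram_negative"], "then": "clostridium"}

clash = conflicts(candidate, knowledge_base)
if clash is None:
    knowledge_base.append(candidate)      # compatible: add it permanently
else:
    print("Needs review by the expert:", clash)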
Using MYCIN was a slow process because the user (typically a doctor who was not an expert on bacterial infections) had to manually answer its many questions. Today, the system could be integrated with medical databases, reducing or eliminating the need for manual entry of information. In developing countries, where experts may not be available at all, access to a system such as MYCIN could save many lives.
Advantages of expert systems
Perhaps one of the most impressive capabilities of expert systems has to do with their ability to acquire new facts that they were not explicitly “taught”, by using their current collection of facts and rules in a process of deduction. This provides an interesting counter-argument to those who claim that computers cannot know more than their programmers know.
After you finish reading these notes, read the additional course notes file in this module’s folder that focuses explicitly on this.
Problems with expert systems: abductive reasoning, coding knowledge, responsibility, common sense
Abductive reasoning
It is tempting to think that when expert systems reason from facts and if-then rules, their calculations are always flawless if the facts and rules are correct. It is not so. The reason for this is that in many expert systems (such as MYCIN), abductive reasoning is used. This is also called Inference to the Best Explanation (IBE), and refers to the ability to identify the most plausible explanation (produce a hypothesis) that accounts for a set of data. One example is when a doctor identifies a disease as the most likely explanation for a set of symptoms. Abductive reasoning is problematic because it is inherently uncertain.
To see more clearly what abductive reasoning is, and why it is uncertain, consider first the more familiar deductive reasoning:
Consider the two premises and the conclusion:
1. If P is true, then Q is true
2. P is true
Conclusion: If 1 and 2 are true, the conclusion is that Q is true – with absolute certainty. We say that deductive reasoning is truth-preserving, because the truth of the premises guarantees the truth of the conclusion.
Example:
1. If Socrates is a man, then Socrates is mortal
2. Socrates is a man
Conclusion: Socrates is mortal
Computers are very good at deductive reasoning, and as long as they stick to deductive reasoning, they can reach deep deductions that their human programmers were never aware of. However, expert systems often need to work the other way around. For instance, although diseases cause symptoms, expert systems need to reason backwards from symptoms to diseases (e.g. if symptoms x and y, then probably disease A), and there is no guarantee that a particular set of symptoms was caused by any particular disease. The computer will need to make an educated guess. This is abductive reasoning:
1. If P is true, then Q is true
2. Q is true
Conclusion: If 1 and 2 are true, the conclusion is that P is true – but this conclusion does not always hold!
For instance:
1. If it rains, Peter will get wet
2. Peter is wet
Conclusion: Therefore, it must be raining
Although the conclusion is plausible, there are plenty of other possible explanations for Peter being wet. Abductive reasoning is not truth-preserving – that is, the truth of the premises does not guarantee the truth of the conclusion.
So why is abductive reasoning a problem for expert systems? After all, humans use it too. Unfortunately, it is not quite clear how humans do it. It appears to be a fairly “creative” process, for lack of a better word, and it seems difficult to replicate this in a computer. Hypotheses can be ranked by their plausibility (this is what MYCIN does), but it is difficult to provide the computer with the tools to make that judgment. False positives (in the case of MYCIN, diagnosing a disease on weak evidence) and false negatives (failing to diagnose a disease because the system is too conservative) are both a problem.
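One simple way to picture the ranking of hypotheses by plausibility is the following sketch; the diseases, symptoms, and scoring formula are invented for illustration and are not MYCIN's actual method:

# Illustrative ranking of hypotheses by how well they explain the observed symptoms.
DISEASES = {
    "flu":       {"fever", "cough", "aches"},
    "cold":      {"cough", "sneezing"},
    "pneumonia": {"fever", "cough", "chest_pain", "shortness_of_breath"},
}

observed = {"fever", "cough"}

def plausibility(disease_symptoms, observed):
    """Score = fraction of observed symptoms the hypothesis explains,
    penalised by the symptoms it predicts but we did not observe."""
    explained = len(observed & disease_symptoms) / len(observed)
    unexplained_predictions = len(disease_symptoms - observed) / len(disease_symptoms)
    return explained - 0.5 * unexplained_predictions

ranking = sorted(DISEASES, key=lambda d: plausibility(DISEASES[d], observed), reverse=True)
print(ranking)   # the best explanation comes first, but it is still only a guess

Even the top-ranked hypothesis may be wrong, which is exactly the sense in which abduction is not truth-preserving.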
Coding expert knowledge as if-then rules
Potential problems:
1. There may be hundreds or thousands of rules, and it may be very time-consuming to extract all the rules from the expert
2. The expert may not be aware of all the rules he uses
3. Not all of the expert’s knowledge may be reducible to if-then rules (philosophical problem – recall Dreyfus’s argument of knowing that vs. knowing how + abductive reasoning)
Potential solutions
Since human experts are not particularly good at articulating their own rules, attempts have been made to get computers to write the rules themselves, based on examples of decisions made by human experts. In the case of PLANT/ds (Michalski & Chilausky, 1981, Illinois), machine-derived rules were far more effective at reaching appropriate answers than those elicited directly from the experts. How? By using an inductive learning program. Induction is when you generalize a principle from multiple observations (from observing the sun rise 100 times, you may conclude that the sun rises every day).
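A toy illustration of rule induction from examples is sketched below; the attribute names are invented, and this deliberately naive method is not the learning program PLANT/ds actually used:

# Naive rule induction: find single-attribute conditions that always predict one class.
examples = [
    {"leaf_spots": True,  "stem_canker": False, "diagnosis": "bacterial_blight"},
    {"leaf_spots": True,  "stem_canker": False, "diagnosis": "bacterial_blight"},
    {"leaf_spots": False, "stem_canker": True,  "diagnosis": "stem_rot"},
]

def induce_rules(examples, target="diagnosis"):
    rules = []
    attributes = [a for a in examples[0] if a != target]
    for attr in attributes:
        for value in {e[attr] for e in examples}:
            matching = [e for e in examples if e[attr] == value]
            labels = {e[target] for e in matching}
            if len(labels) == 1:              # this condition perfectly predicts one class
                rules.append((f"IF {attr} = {value}", f"THEN {labels.pop()}"))
    return rules

for condition, conclusion in induce_rules(examples):
    print(condition, conclusion)
# prints one induced rule per line, e.g. IF leaf_spots = True THEN bacterial_blight

The program generalizes from the examples to if-then rules, rather than having an expert hand-code them.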
Responsibility (ethics)
If the expert system makes a mistake, who is responsible? The doctor who used it? The knowledge engineer? The programmers? The managers of the company that hired them? (we will discuss ethical issues further on in the course)
Common sense
Since we have already discussed how common-sense problems affect AI software, we will not discuss this issue this week, but make sure you read the John McCarthy texts (readings), since they will be helpful for the activity.
—
Expert systems (aka knowledge-based systems) still operate under the paradigm of Classical AI (reasoning as symbol manipulation), but nevertheless mark an important change in the history of AI, from grand plans and hyperbolic claims about future achievements that seemed to lead nowhere, to more modest approaches focusing on the strengths of what had been learned. AI researchers were now equipped with a vast amount of knowledge about what does not work, and although they didn’t always know precisely why it didn’t work, they managed to avoid obvious dead-ends that didn’t seem at all obvious before, and began to find ways to make the most of what could be done.
Remember to have a look at the additional notes file.
Activity
Watch the video in which Douglas Lenat discusses the uses of Cyc to improve search engine searches.
You can watch until 10:20, where he says that if you have entered enough of these sentences the system will be able to understand, or continue until around minute 20 to see a few examples of how Cyc can help.
Video: https://www.youtube.com/watch?v=gAtn-4fhuWA
According to Douglas Lenat, Cyc can perform “semantic search”. Google cannot reply to questions such as “What is the weather in the capital of France?”, although it has access to websites with information on the weather in Paris. Its search engine is syntactic (it looks for words). A search engine using Cyc, however, could answer this question correctly, because it would have the information that Paris is the capital of France as part of its database of commonsense knowledge. Douglas Lenat also shows how it could find a picture of a “man smiling” by identifying a caption that describes a man playing with his daughter. Cyc can consult its database and determine that parents spending time with their children are often happy, and happy people often smile.
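The difference between syntactic and semantic search can be pictured with a toy knowledge base; the facts, the weather lookup, and the question-parsing step below are invented for illustration and are not how Cyc works internally:

# Toy "semantic search" (illustrative only; not Cyc's actual machinery).
import re

FACTS = {("France", "capital"): "Paris", ("Germany", "capital"): "Berlin"}
WEATHER = {"Paris": "cloudy, 14 °C", "Berlin": "sunny, 12 °C"}   # stand-in for weather websites

def answer(question):
    # A purely syntactic engine searches for the literal words of the question;
    # a semantic engine first rewrites "capital of X" using its commonsense facts.
    match = re.search(r"capital of (\w+)", question)
    if match:
        city = FACTS.get((match.group(1), "capital"))
        if city:
            return f"The weather in {city} is {WEATHER.get(city, 'unknown')}."
    return "No answer found."

print(answer("What is the weather in the capital of France?"))
# The weather in Paris is cloudy, 14 °C.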
ESSAY QUESTIONS:
In answering the question, show that you understand the pros and cons of expert systems. Remember, your essay needs to have a clear thesis.
In the discussion, focus on the following sub-topics:
Does Cyc have knowledge, as Lenat claims, in a way that MYCIN does not? Is Lenat solving the problem of common sense, as discussed by John McCarthy (see readings)?