COMP575 Intelligent Agents

= Intelligent Agents Assignment 1 =

Date: 2/4/08

Student: Michael Weingarden

Course ID: COMP575 MultiAgent Systems

Semester/Year: Spring 2008

Title/Topic: Project 1, RuleApp

== Getting things going ==

First, I want to share some of the effort it took to get things set up. I spoke to Dr. Wolfe about it and he thought it would be worthwhile to share with others. Please give me feedback on whether this information was valuable to you or not.

I used to program quite a bit in Perl and Visual Basic, but I'm new to Java. I actually had quite a hard time in Artificial Intelligence because I wanted to use toolkits like JOONE as a jumping-off point, but I had problems getting the source code to run from the Eclipse IDE: a ton of warnings and errors. When I received the book, "Constructing Intelligent Agents Using Java," I had the same problem. I was able to pull the source code into Eclipse, but there were too many errors to debug or run the code. Looking at all the warnings and errors, I got the hint that the problems were related to the Java or JDK version. However, I couldn't find any reference that explained how to upgrade old Java source code to work with the new JDK conventions. Of course, some of you reading this already know the simple answer. I did not.

Aside from the problems with Eclipse, I tried running the ABLE editor that came with the book's CD and it wouldn't run properly either. It might have something to do with the Windows Vista I'm using, or maybe a JRE version mismatch. Regardless, I plowed on. My first troubleshooting step was to look for a newer version of the source code. Not an easy task: the path was winding and doubled back on itself several times. The author of the book really doesn't have anything at his web site, and of the several places ABLE was referenced at the IBM web site, several were useless.
Eventually, I found a place that seemed to be the clearinghouse for all things ABLE, and although it was last updated in 2005, I was able to download a bunch of stuff, get the ABLE editor to run, and find an Eclipse plug-in as well. I've placed a link on Dr. Wolfe's course web page in the Links section. Unfortunately, I still wasn't able to get the source code to load into Eclipse without a bunch of errors. Eventually I ran across a reference to a compliance "switch" that could be flipped within the Eclipse IDE. I found the switch, flipped it, and discovered that I could debug and run the Java source code for RuleApp with no problem. In case someone else suffers from the same problem, I'm including some snapshots of where you need to make the tweaks so that you can load third-party Java source code without show-stopping errors.

Here's what I got before I changed the compliance settings: the red X's indicate errors that prevent debugging or running the application. Then I found the place to change the compliance settings in Eclipse, went back to a time when all things JDK 1.4 ran perfectly, and all my errors went away, although a bunch of warnings remained. So, finally, I was able to work with the latest and greatest source code from IBM in the Eclipse IDE. Yea! Now I'll proceed with playing with the RuleApp.

== Forward Chaining through RuleApp ==

The first thing I wanted to try was a simple experiment: what happens when you forward chain through RuleApp?
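Before wading through the transcript, it may help to see what a forward-chaining cycle actually does: match each rule's antecedents against the current variable values, fire the rules that match, and repeat until nothing new fires. Here is a minimal sketch of that idea. This is my own toy version, not the book's CIAgent RuleBase code; the `Rule` record, the `forwardChain` method, and firing rules in list order are all my simplifications.

```java
import java.util.*;

// Toy forward-chaining sketch (hypothetical, not the CIAgent RuleBase API).
// Facts are name=value pairs; a rule sets its consequent variable once all
// of its antecedent clauses match the current facts.
public class ForwardChainDemo {
    public record Rule(String name, Map<String, String> ifClauses,
                       String setVar, String setVal) {}

    public static void forwardChain(List<Rule> rules, Map<String, String> facts) {
        Set<String> alreadyFired = new HashSet<>();
        boolean fired = true;
        while (fired) {                       // repeat until a full pass fires nothing
            fired = false;
            for (Rule r : rules) {
                boolean matches = !alreadyFired.contains(r.name())
                        && r.ifClauses().entrySet().stream()
                            .allMatch(e -> e.getValue().equals(facts.get(e.getKey())));
                if (matches) {
                    facts.put(r.setVar(), r.setVal());  // fire the rule
                    alreadyFired.add(r.name());
                    fired = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("Automobile", Map.of("num_wheels", "4", "motor", "yes"),
                     "vehicleType", "automobile"),
            new Rule("sedan", Map.of("vehicleType", "automobile", "num_doors", "4"),
                     "vehicle", "Sedan"));
        Map<String, String> facts = new HashMap<>(
            Map.of("num_wheels", "4", "num_doors", "4", "motor", "yes"));
        forwardChain(rules, facts);
        System.out.println(facts.get("vehicle")); // prints Sedan
    }
}
```

The real RuleApp keeps a conflict set and appears to fire the more specific rule first (which would be why sedan(3) fires before Automobile(2) in the trace below); this sketch just fires matching rules in list order.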
So, I started off by setting values: I set each RuleVariable to something that would end up looking like a Sedan, and then I started the engine (so to speak). Here's the result that RuleApp put out:

code format="escaped"
--- Setting all Vehicles Rule Base variables to null
--- Starting Inferencing Cycle ---
vehicleType value = automobile
size value = medium
num_wheels value = 4
num_doors value = 4
motor value = yes
vehicle value = Sedan
Testing rule bicycle
Testing rule tricycle
Testing rule motorcycle
Testing rule sportsCar
Testing rule sedan
Testing rule miniVan
Testing rule SUV
Testing rule Cycle
Testing rule Automobile
 -- Rules in conflict set: sedan(3), Automobile(2),
Firing rule sedan
Testing rule bicycle
Testing rule tricycle
Testing rule motorcycle
Testing rule sportsCar
Testing rule sedan
Testing rule miniVan
Testing rule SUV
 -- Rules in conflict set: Automobile(2),
Firing rule Automobile
Testing rule bicycle
Testing rule tricycle
Testing rule motorcycle
Testing rule sportsCar
Testing rule sedan
Testing rule miniVan
Testing rule SUV
Testing rule Cycle
Testing rule Automobile
 -- Rules in conflict set:

vehicleType value = automobile
size value = medium
num_wheels value = 4
num_doors value = 4
motor value = yes
vehicle value = Sedan
--- Ending Inferencing Cycle ---
code

Sure enough, I got a sedan!

== Backward Chaining through RuleApp ==

Backward chaining was pretty cool. When you backward chain, you're prompted for all the parameters. (A note regarding images: I used the Windows Vista "Snipping Tool" to capture all the images on this page. However, above I saved as GIF and below I saved as PNG, and the PNG files are obviously much better quality. FYI.) I started by choosing Backward Chaining, then started the process and was prompted for the number of wheels, then for motor or no motor. Finally, RuleApp decided it had enough info to determine the vehicle and correctly reported "bicycle". Here's the full data dump from RuleApp:

code format="escaped"
--- Setting all Vehicles Rule Base variables to null
--- Starting Inferencing Cycle ---

vehicleType value = null
size value = null
num_wheels value = null
num_doors value = null
motor value = null
vehicle value = null
Evaluating rule bicycle
Evaluating rule Cycle
+++ Could Not Find Solution for goal: num_wheels
Rule Cycle is true, setting vehicleType: = cycle
Evaluating rule Automobile
Rule Automobile is false, can't set vehicleType
+++ Could Not Find Solution for goal: motor
Rule bicycle is true, setting vehicle: = Bicycle
+++ Found Solution for goal: vehicle
vehicleType value = cycle
size value = null
num_wheels value = 2
num_doors value = null
motor value = no
vehicle value = Bicycle
--- Ending Inferencing Cycle ---
code

== Batch file to run RuleApp ==

I don't mean to insult anyone, but I made a batch file to run RuleApp from the DOS command line (it works even in Vista!) and thought I would share. For this to work, the RuleApp java files have to be in a directory called "rule" under the ciagent2e directory. Here's the batch file:

code format="escaped"
cd C:\Users\Michael\Desktop\ciagent\ciagent2e
java rule.RuleApp
code

For you youngsters: just place this text in a text file and change the extension to .bat.

== References ==
 * [|Latest and Greatest ABLE stuff]

= Intelligent Agents Assignment 2 =

Date: 3/9/08

Student: Michael Weingarden

Course ID: COMP575 MultiAgent Systems

Semester/Year: Spring 2008

Title/Topic: Project 2, CIAgent, PAManager, FileAgent, WeatherAgent

== An Agent is Born ==

I'm a little bit late on this one, so I decided to try to gold-plate it. I went and looked at what other students had done, and I had done much of the same thing up until this week: I installed and configured PAManager, and took FileAgent, User Notification Agent, Schedule Agent and Airfare Agent for test rides. Then I read Dr. Wolfe's email about the student who had created his own agents. I started looking at the code and the structure of the agents, and it looked like it wouldn't be that difficult to put my own agent together. So I came up with the idea that it would be interesting to grab some weather data off the web and pop up an alert with the current temperature.

The first thing I found was the Yahoo! Weather API, which provides a variety of weather info in an RSS feed. Then I looked around for an existing class or package that would parse the RSS feed. I looked at several possibilities, but the one that looked best was the Informa RSS Library for Java (see references below). I tried using the examples from the Quickstart and the FAQ, but they didn't work, so I searched around the web for a bit. I ran across a couple of sites with examples of Informa usage, and one really helped me (again, see references below). The examples didn't do exactly what I needed (pick the temperature out of the RSS feed page) but they did pick some information out, so I was able to figure out which methods to call to get the data I needed.

Of course, that was only the beginning, because I still needed a way to get the zip code from the user and then get the temperature to the alert dialog. I figured that the FileAgent did most of what I needed for my WeatherAgent, so I just copied all the FileAgent files (FileAgent.java, FileAgentCustomizer.java and FileAgentBeanInfo.java) and changed the names from File... to Weather...
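At the heart of the agent is just pulling one number out of the feed. As a rough sketch of that extraction step (simplified from what I did with Informa: the regex approach, the method name, and the sample feed snippet here are illustrative only; at the time, Yahoo!'s weather RSS carried the current conditions in a yweather:condition element with a temp attribute):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of a temperature extractor for a Yahoo!-style weather RSS feed.
// Illustrative only -- my actual GetTemp class used the Informa library
// rather than a regex.
public class TempParser {
    // Returns the temp="" attribute of the yweather:condition element,
    // or null if no such element is present.
    public static String extractTemp(String rss) {
        Matcher m = Pattern
            .compile("<yweather:condition[^>]*\\btemp=\"(-?\\d+)\"")
            .matcher(rss);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String sample = "<rss><channel><item>"
            + "<yweather:condition text=\"Fair\" code=\"34\" temp=\"68\"/>"
            + "</item></channel></rss>";
        System.out.println(extractTemp(sample)); // prints 68
    }
}
```

A proper RSS library (like Informa) handles namespaces and malformed feeds far better than a regex; this is just the shape of the problem.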
Here's how you get the zip code to the WeatherAgent: since the FileAgent had a way of asking for text that could be embedded in the alert, I just hacked into that portion of the code and used it to get the zip code from the user (hopefully you). Then I took my little snippet of example code and made it into a class so I could use it to get information from other RSS feed pages in the future. I called my class GetTemp (not a very general-purpose name, oh well). Then I created a TestStarter.java file with a "main" procedure in order to test the GetTemp class.

It took a while to get all the syntax right. I have not done much object-oriented programming in the past, so it took me a while to figure out how to pass the zip code from the WeatherAgent form into the GetTemp object. Then I had to struggle a bit with getting the GetTemp object to return the temperature. Fortunately, there is a happy ending: I was able to get input and output, so I proceeded to modify WeatherAgent.java to actually pass the zip code to GetTemp and then display the alert to the user. PAManager shows my Weather Agent once the corresponding mod is made to pamanager.properties, and the watch alert pops up with the temperature.

Finally, here is a link to the whole application wrapped up in one big executable JAR file (plus the pamanager.properties file, of course). Just download the zip file and put both the JAR file and the pamanager.properties file in the same directory. Then double-click the JAR file and the application should run, as long as you have a Java JRE installed (and you should!). Oh, and the JAR was created by a neat Eclipse plug-in called Fat Jar Exporter, which I have a link to in the references below.

== References ==
 * [|Informa RSS Utilities for Java]
 * [|Informa examples that work]
 * [|Yahoo! Weather RSS]
 * [|Fat Jar Exporter]

= Intelligent Agents Assignment 3 =

Date: 3/16/08

Student: Michael Weingarden

Course ID: COMP575 MultiAgent Systems

Semester/Year: Spring 2008

Title/Topic: Project 3, InfoFilter, URLReaderAgent, et al.

== Getting Started ==

I discovered that if you want to run the InfoFilterApp from within Eclipse, there is a missing file: URLReaderAgentCustomizer.java. The CD includes the URLReaderAgentCustomizer.class file, so the app works fine from the command line, but you can't run it from within Eclipse without the source file. I went back and looked around and couldn't find the file anywhere (Bigus's web site, Google, etc.). However, via Google, I noticed that Zabaronick had mentioned the same problem. Unfortunately, the web site she referred to crashed a while ago and that file is no longer posted there. I sent emails to everyone concerned (Joseph Bigus, Zabaronick, and the person Zabaronick referred to), but I was too impatient to wait, so I downloaded a Java class decompiler and made my own file. I'm including it as a link in the References section.

For those of you who didn't get to see my last assignment: I did take the time to create my own agent, a WeatherAgent that requires the user to enter a zip code (via the WeatherAgentCustomizer) and then goes out and grabs temperature data from the Yahoo! Weather API/RSS feed. You can check it out via my last assignment's write-up.

== InfoFilter and the Neural Network ==

When looking at the InfoFilter application, my initial thought was to avoid using the NewsReaderAgent altogether, because USENET news clients and servers are almost outdated (I only use Google Groups, and only when I believe I can find useful info in the USENET newsgroups). However, after playing with the URLReaderAgent, I realized that it was going to take some work to pick out enough pages to train the InfoFilter neural network (NN). So I decided to push myself to consider the NewsReaderAgent. First problem: how do you even get access to a news server? For a variety of reasons, I don't have easy access to one.
It's been several years since I did have access, and even then I recall it was a bit cumbersome to get the connection going (account required, username needed, password needed, address of server required). I figured there were probably a few benevolent geeks out there, so I Googled for free USENET news servers and sure enough I found them. I went with one of the first ones listed: freenews.netfront.net. I tried to use that address in the NewsReaderAgent, but had problems initially. To troubleshoot, I tried entering the address into Firefox, which brought up the Microsoft Mail email client (I didn't even know I had it), and it allowed me to connect to the news server and download a bunch of news headers. I then pulled up the comp.ai.neural-nets newsgroup, as suggested in the book. I used Microsoft Mail and Google Groups just to get a feel for how the newsgroups looked from different "reader perspectives." Finally I got the NewsReaderAgent to work; I think when it didn't work, I was just using the default news server, which I'm sure has added access restrictions since the course text was written.

At first, I was a bit tentative about downloading articles to the NewsReaderAgent, but it was obvious that I had to download more. My first foray into scoring and rating articles gave me just 10 articles, with mostly 0's and 1's for scores and corresponding ratings. After reading chapter 9 about how the application was supposed to work, I realized that this would not be enough data. The application "scores" articles based on how many times the keywords appear in the article. To train the NN, you set the "rating" of the articles according to your own personal tastes; the NN can then use your ratings to assign proper weights to each of the keywords. However, if each keyword occurs only 0 or 1 times in a document, I didn't feel that I would be able to train the NN very well. So I decided to get a lot of data.
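The raw "score" described above is just a keyword occurrence count, which can be sketched in a few lines. This is my own simplification of the idea, not the actual InfoFilter source:

```java
import java.util.List;

// Toy version of InfoFilter-style raw scoring: the score of an article is
// simply how many times any of the profile keywords occurs in its text.
// (My simplification -- the real app tokenizes and matches differently.)
public class KeywordScore {
    public static int score(String article, List<String> keywords) {
        int count = 0;
        // Split on non-word characters and count tokens that are keywords.
        for (String word : article.toLowerCase().split("\\W+")) {
            if (keywords.contains(word)) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        List<String> keywords = List.of("neural", "network", "fuzzy");
        String article = "Understanding the basics of Artificial Neural Networks "
                       + "requires more than one neural model.";
        System.out.println(score(article, keywords)); // prints 2
    }
}
```

Note that "Networks" does not match the keyword "network" in this sketch, which mirrors why exact-occurrence scores stay so low: with mostly 0's and 1's per keyword there is little signal for the NN to learn from.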
I noticed that the Google Group said there were 34,256 articles. That frightened me. Fortunately, I've had experience with newsgroups, and I had a feeling that while the Google Group had a large archive, USENET news servers usually don't keep that many records, and don't always keep them for very long. Just to be safe, I used Microsoft Mail to see how many articles it thought were available: a merciful 126. I checked the Microsoft Mail reader's settings, and it was set to allow as many as 300 headers to be downloaded; since it only saw 126, I figured the NewsReaderAgent would see the same number at this particular USENET server (freenews.netfront.net). Later, I discovered that the NewsReaderAgent has a hardcoded limit of 100 articles (when you select All Articles), but with my experience in this sort of thing, I felt it was better to be safe than sorry.

So I gave the NewsReaderAgent the command to download "All Articles". As predicted, there was a mercifully small number of articles (compared to 34,256), but a more reasonable sample for NN training purposes. As the articles continued to load, I could see 0's and 1's, but now I was also seeing 4's, 7's, 9's and finally even higher numbers. Of course, a score like 17 came in for an article called "call for papers," which I assumed would be quite useless as far as data and NN training. However, I did go in and improve the rating for articles like "Understanding the basics of Artificial Neural Networks." I also decreased the rating for the call-for-papers articles, though perhaps against all reason: I had a personal feeling that those articles did not have info I would be interested in, but I knew that the NN was only concerned with weighting the individual keywords I had added via the InfoFilterAgentCustomizer.
Ultimately, it made much more sense to use the NewsReaderAgent rather than the URLReaderAgent to test the NN, because the news reader provided lots of articles with titles and content that I could easily use for training. If I had gone with the URL reader, I would have had to select a bunch of web pages myself, and I wasn't sure I could do it as methodically and consistently as the news reader did.

== Using the Filters ==

There is a linear sequence that is required (at least during the first round) for using the filters in the InfoFilter application. The process is basically described in the book, but not exactly in procedural order. The numbered steps listed later in this write-up give the order I recommend in order to get the most benefit out of the filtering capabilities; note that you will only see the second two filter options once you have trained the NN. After pressing the Train NNs button, you can watch the progress of the training in the InfoFilter application's main window. One note: there is a little inconsistency between the menu that lets you choose the filter and the results window. In the menu you see "using Feedback," while in the results window you see "using Predicted Ratings."

== Muster your Clusters ==

I just wanted to add some notes and questions about clustering and clusters in the context of the K map used in this application. The InfoFilter application is hard-coded to use 4 clusters in the K map: two rows and two columns. I went back and read Chapter 5 on clustering, but it didn't make clear to me how elements are put in a particular cluster, nor why the cluster map is arranged as a table (why not just a linked list or a stack?). When you use the "using Clusters" filter on the news articles, the results you see are very interesting.
The first thing I noticed is that there are 5 possible ratings but only 4 cluster values, so the clusters are not tied to the ratings. More interesting is that items with different ratings can have the same cluster value. I believe the rating values are holdovers from counting the number of occurrences of the keywords in the profile; the clusters must be based on the weighted keyword values. One thing that is obvious from using the Clusters filter is that there is less granularity in the sorting. The "using Feedback" filter looks much more desirable, because every article can be quantitatively compared to the others, so the sorting is more accurate.

== Limitations of the neural network in the InfoFilter Application ==

Here is a list of features I would add to the various applications (InfoFilter, NewsReaderAgent and URLReaderAgent) if I had the chance:
 * More granular control over the number of headers to download from USENET. Short of that, an article number to the left of each article would at least help quantify the amount of data available.
 * The browser part of the USENET reader would be more useful if it converted HTML tags into actual HTML features (like links or tables or what have you), which would make the individual pages easier to read.
 * For the URLReaderAgent, it would be nice to add some sort of web-spider capability, so that the program itself grabs a bunch of data from a search engine and the user doesn't have to do this manually.

On the plus side, I was impressed by how quickly the NN went through the training process.

== References ==
 * 1) Profile -> Customize...
 * 2) Create Profile
 * 3) Download all news articles
 * 4) Profile -> Add all articles
 * 5) Profile -> Customize...
 * 6) Train NNs
 * 7) Filter -> (at this point you can choose "using Keywords", "using Clusters", or "using Feedback")
 * If you choose to filter by Keywords, the articles are sorted in descending order, from the one with the most keyword occurrences to the one with the least.
 * If you choose to filter by Clusters, the articles are sorted by cluster.
 * If you choose to filter by Feedback, the articles are sorted by the "back propagation neural network output unit activation" (try saying that three times fast!), a value between 0.0 and 1.0.
 * [|URLReaderAgentCustomizer.java]
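To make the Feedback numbers above less mysterious: a back-propagation output unit is a sigmoid, so every article lands strictly between 0 and 1 and can be sorted by that activation. A toy illustration with hand-picked weights (nothing here is the trained InfoFilter network; the weights, bias, and keyword-count vectors are made up):

```java
import java.util.Arrays;

// Toy illustration of a Feedback-style filter: a single sigmoid unit maps a
// weighted keyword-count vector to a value in (0, 1), and articles can then
// be sorted by that activation. Weights are hand-picked, not learned.
public class FeedbackRank {
    public static double activation(double[] counts, double[] weights, double bias) {
        double sum = bias;
        for (int i = 0; i < counts.length; i++) sum += counts[i] * weights[i];
        return 1.0 / (1.0 + Math.exp(-sum));   // sigmoid squashes into (0, 1)
    }

    public static void main(String[] args) {
        double[] w = { +1.5, -1.0 };           // e.g. one keyword liked, one disliked
        double[][] articles = { {3, 0}, {0, 3}, {1, 1} };  // keyword counts per article
        double[] scores = Arrays.stream(articles)
                                .mapToDouble(a -> activation(a, w, -0.5))
                                .toArray();
        System.out.println(Arrays.toString(scores));
        // The article heavy in the liked keyword scores near 1;
        // the one heavy in the disliked keyword scores near 0.
    }
}
```

This is also why a negatively weighted keyword can push an article's score down even though the keyword is in the profile.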

= Intelligent Agents Assignment 4 =

Date: 4/27/08

Student: Michael Weingarden

Course ID: COMP575 MultiAgent Systems

Semester/Year: Spring 2008

Title/Topic: Project 4, Marketplace

== Try out every combination ==

I couldn't think of how to get started with this one, so I finally decided to at least try every combination and see what the results were. I took screen captures of each: Basic Buyer vs. Basic Seller, Better Buyer vs. Basic Seller, Best Buyer vs. Basic Seller, Basic Buyer vs. Better Seller, Basic Buyer vs. Best Seller, Better Buyer vs. Better Seller, Best Buyer vs. Best Seller, and Multiple Buyers vs. Basic Seller.

== Summary of Results ==

 * Basic Buyer vs. Basic Seller: $650
 * Better Buyer vs. Basic Seller: $575
 * Best Buyer vs. Basic Seller: $500
 * Basic Buyer vs. Better Seller: $575
 * Basic Buyer vs. Best Seller: $575
 * Better Buyer vs. Better Seller: $470
 * Best Buyer vs. Best Seller: $470
 * Multiple Buyers vs. Basic Seller: $625

If you look at the results, you see some strange things. For example, the basic seller seemed to get more money out of the basic buyer than the best seller did! After I saw that result, I went back and tried it again just to be safe, and sure enough, this time the best seller did much better and got $725 out of the buyer (I'm including a screen capture of the new result). What's interesting to me, though, is that sometimes the basic seller will do better than the best seller. This also leads to concerns about the pragmatism of this particular exercise, which I'll discuss more below. Ultimately, the fuzzy logic seems to do better than the other types of negotiation, which intrigues me and makes me want to learn more about it. Unfortunately, I'm still not getting the gist of how fuzzy logic is different from probability or expert systems. This is my second exposure to fuzzy logic (the first being the Artificial Intelligence class), so I'm looking forward to the opportunity to study its nuts and bolts in a more pragmatic application.

== Program Lockups ==

Another interesting thing I observed is that the Java program frequently locked up.
I'm not sure whether it was because of the Eclipse environment, since I only tried running the app from Eclipse. However, I was always able to stop the application via the Eclipse console. Another oddity was that there were no error messages; the program just seemed to stop. One might believe that the program was just thinking, and that is a possibility, but it was odd that you could stop the app, run it again, and all of the transactions would execute quickly. Since I did see different results when I ran the same scenario more than once, I wonder if sometimes the logic bogs the program down and other times it finds its way quite easily. Again, this would be interesting to investigate given more time.

== Pragmatic Considerations ==

I like this particular application because I've been involved in a lot of negotiations in past business dealings. I've been on the buying end and the selling end, and I believe the negotiation process is a fine art. However, one of the main rules of negotiation is to never let the negotiation come down to one thing (usually price). When that happens, you have a standoff and somebody must lose; for a good negotiation to occur, both parties should feel like they won at some level. In that regard, the exercise is not ideal. On another level, I think it will be some time before consumers or business owners are comfortable allowing computers to negotiate for them. Even if they were shown that an agent could statistically achieve satisfactory results, there would always be the paranoia that the other party in the negotiation might have a better agent. That's why it's nice to have a person negotiating: they can sense when they're in too deep and call in a higher authority if necessary.
My misgivings about this exercise don't mean that it shouldn't be pursued, but certainly anyone designing an agent-based system like this will need to consider these factors and address them.
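To make the pattern in the price table concrete, the buyer/seller rounds can be caricatured as alternating concessions: each round, the buyer raises its bid and the seller lowers its ask until the offers cross, and a buyer that concedes less per round closes at a lower price. This is purely my own toy loop with made-up numbers, not the Marketplace agents' actual strategies (fuzzy or otherwise):

```java
// Toy alternating-offers negotiation (my own caricature, not the Marketplace
// code): buyer and seller concede by fixed steps until the offers cross, and
// the deal closes at the seller's last ask. Assumes positive concession steps.
public class NegotiationDemo {
    public static int negotiate(int bid, int ask, int buyerStep, int sellerStep) {
        while (bid < ask) {
            bid += buyerStep;      // buyer concedes upward
            ask -= sellerStep;     // seller concedes downward
        }
        return ask;                // price where the offers crossed
    }

    public static void main(String[] args) {
        // A stingier buyer (smaller concessions per round) ends up paying less.
        System.out.println(negotiate(300, 800, 50, 25)); // prints 625
        System.out.println(negotiate(300, 800, 25, 25)); // prints 550
    }
}
```

Even this caricature shows why the negotiation "comes down to one thing": price is the only axis, so one side's gain is exactly the other's loss.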

= Intelligent Agents Assignment 5 =

Date: 5/3/08

Student: Michael Weingarden

Course ID: COMP575 MultiAgent Systems

Semester/Year: Spring 2008

Title/Topic: Project 5, InfoFilter Part Deux

== Experimental Web Pages ==

When we first looked at InfoFilter, I wondered what it is really doing and how it makes decisions. I'm a systems guy, so I decided to look at what happens from a systems perspective: I made up a few sample web pages to see what kind of results I would get based on which keywords are used and how often they are used. Links to the sample pages are in the list below.

== Summary of Results ==

Some pretty interesting results, I believe. The first thing I did was test (test page 1) what happened if I loaded one of my wiki pages with no keywords. My hope was that I would get a score of zero, so I could forget about all the underlying HTML code and create additional test pages without worry. This turned out as I had hoped. Then I tried a test page (2) to see what the count was if I loaded all 10 keywords, and I did get a score of 10. While performing the first two tests, I discovered that if a keyword is butted up against any HTML code, it does not get scored, so I had to keep this in mind when designing the rest of the test pages. Then I tested (test page 3) what would happen with one keyword repeated a bunch of times, and the score matched the expected result.

So now I was ready to test the neural network. I did not bother with the Cluster filter; for this exercise I focused on the NN (Feedback) filter. Test page 4 collects a variety of scenarios to put the NN through its paces. Test page 5 was created so that I could load a page with one keyword and then train the NN that I really liked that keyword: the page came in with a score of 1 and rated "Not Very Useful," so I changed the user rating to "Interesting." Then, as a control, I loaded a page with just the word "neural" and left its user rating at "Not Very Useful." Next I trained the NN, and finally I loaded all the remaining test pages, from page 7 to 13.
The results are easily encapsulated in one snapshot, which is located on test page 4, but I've also included it here for your convenience. It helps to look at Test Page 4 while viewing the image; if you do so, you can see the following:
 * [|Test Page 1, control page, no keywords]
 * [|Test Page 2, all 10 default keywords listed once]
 * [|Test Page 3, fuzzy 7 times]
 * [|Test Page 4, a variety of scenarios with just fuzzy]
 * After training the NN with test page 5 (rated Interesting by me) and test page 6, InfoFilter ended up with a score of 0.90 for test page 5 (fuzzy) and 0.34 for test page 6 (neural).
 * Test page 7, which also had the word fuzzy listed once, came in with the same score as test page 5, as I was hoping.
 * Test page 8, which had the word neural, also came in with the same score as test page 6, again as expected.
 * Test page 9, which had fuzzy listed several times, came in with a score of 0.99, which seemed appropriate.
 * **Test page 10- finally something interesting-** the page that had neural listed several times came in with a score of 0.02! Because the user rating was "Not Very Useful" during training, the NN decided that this page with neural listed many times was less useful than a page with neural listed only once!
 * Test page 11 had all the keywords listed and came in with a score of 0.78. After I saw that result, I was spurred on to create an additional page to see how a page would fare if it had all the keywords except my favorite, fuzzy.
 * Test page 12 had all keywords except fuzzy. This scored much lower (0.48) than the page with fuzzy just listed once. This spurred me on to see what would happen with neural removed.
 * Test page 13 had all keywords except neural and fuzzy. This score was 0.74 which was lower than the page with all keywords and lower than the page with just fuzzy listed once. However, the score did go up after neural was removed.

== Conclusion ==

Although I turned this exercise into merely a test of the NN capability of InfoFilter, it has really intrigued me about how the guts work. At least now I have the confidence to believe that training the NN works, and I have some intelligent questions to ask as I look at neural networks in more detail during summer school. One big question is this: if a keyword is specified (like neural in my examples above), do we really want the user rating to be able to negatively influence the score for that word? On the one hand, if we take the time to assign it as a keyword, shouldn't it only positively affect scores? On the other hand, it is kind of a neat mechanism for filtering out garbage that appears in searches (like when you're looking for info about the Eclipse IDE and all you keep finding is information about solar and lunar eclipses).