-
If it could ignore some information at will and only add to its memory when it decided to, that would enable iteratively going over notes without forgetting the task it is thinking about, leading to the ability to iteratively build on past progress. But it's super dumb in the sense that even trash text like "ahfdsuhbausdfsuzdhahbfuzsahfuzsdhfuzasdfhuz" totally and completely clogs up its working memory. I have no idea how the biggest companies in the world did not think of that, except that it is about luck and the biology of motivation and not about true intelligence/diligent work/talent/creativity, but just about copy-pasting what other people figured out and staying on top with patents. Although in the case of OpenAI, I would say it is that they have so many people that knowing who to listen to is hard: basically you need to be a chef to know who is a good chef without going by superficial things, a tax accountant to know who is giving good tax advice, and a genius, super-creative AI person to know who to listen to, or else have massively good epistemology for spotting good arguments and thinking about ideas in a non-superficially-biased way, instead of basically discounting everything a non-PhD says.
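A minimal sketch of that gating idea, assuming a hypothetical relevance scorer (here a trivial word-overlap heuristic; in practice it might be another model call): a note only enters working memory if the gate accepts it, so junk strings cannot clog the context.

```python
# Sketch of a gated working memory: candidate text is only committed
# if a relevance check accepts it. The scorer below is a placeholder
# heuristic, not a real implementation.

def relevance_score(task: str, text: str) -> float:
    # Placeholder: fraction of the candidate's words that also occur
    # in the task description.
    task_words = set(task.lower().split())
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in task_words for w in words) / len(words)

class GatedMemory:
    def __init__(self, task: str, threshold: float = 0.2):
        self.task = task
        self.threshold = threshold
        self.notes: list[str] = []

    def offer(self, text: str) -> bool:
        """Add `text` to memory only if it clears the relevance gate."""
        if relevance_score(self.task, text) >= self.threshold:
            self.notes.append(text)
            return True
        return False

mem = GatedMemory(task="summarize the meeting notes about the budget")
mem.offer("notes from the budget meeting: spend up 10%")   # accepted
mem.offer("ahfdsuhbausdfsuzdhahbfuzsahfuzsdhfuzasdfhuz")   # rejected, no clog
print(mem.notes)
```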
-
Hey @PriNova any chance to have a chat?
-
In times of LLM hype, there is often too much focus on a single topic, and other topics lose visibility or fade into the background.
LLMs are often given tasks for which specialized AI systems already exist.
Here I would like to bring one of these systems back into the foreground and consider a possible fusion of its architecture with an LLM such as GPT-4.
Abstract:
Discussion in the scientific community has revolved around the potential fusion of Soar, a cognitive architecture designed to build general-purpose intelligent agents, and Large Language Models (LLMs) such as GPT-4, which are deep learning models trained on large amounts of textual data. The goal of this fusion is to leverage the strengths of both systems to create a more robust and versatile AI.
Soar's strengths lie in its structured knowledge representation, its problem solving and planning capabilities, and its ability to learn from experience. It uses symbolic, relational structures to represent knowledge, which can be more interpretable and transparent than the distributed representations used by LLMs. Soar is designed to support complex problem solving and planning tasks, and it has mechanisms to learn from its experience and improve its performance over time.
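To make the contrast with distributed representations concrete, here is a minimal sketch, in Python with hypothetical class and method names, of the kind of symbolic, relational structure Soar uses: working memory is a set of (identifier, attribute, value) triples, each of which can be inspected directly.

```python
# A minimal sketch of Soar-style symbolic working memory.
# Soar represents state as working memory elements (WMEs):
# (identifier, attribute, value) triples. The names here are
# hypothetical, for illustration only.

class WorkingMemory:
    def __init__(self):
        self.wmes = set()  # each element is an (id, attr, value) triple

    def add(self, identifier, attribute, value):
        self.wmes.add((identifier, attribute, value))

    def query(self, identifier=None, attribute=None):
        """Return all WMEs matching the given identifier/attribute."""
        return [
            (i, a, v)
            for (i, a, v) in self.wmes
            if (identifier is None or i == identifier)
            and (attribute is None or a == attribute)
        ]

wm = WorkingMemory()
wm.add("S1", "block", "B1")
wm.add("B1", "color", "red")
wm.add("B1", "on", "table")

# Every fact is inspectable: unlike a distributed embedding, we can
# read off exactly why the agent believes B1 is red.
print(sorted(wm.query(identifier="B1")))
# [('B1', 'color', 'red'), ('B1', 'on', 'table')]
```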
On the other hand, LLMs excel at processing and generating natural language. They have been trained on a large corpus of text data and can generate coherent and contextually relevant sentences by predicting the next word in a sequence. However, they lack an explicit understanding of the content they generate.
The proposed fusion of Soar and LLMs would involve using the LLM as a "meta-programmer" for Soar, generating rules for Soar based on the input it receives. This could potentially allow for more dynamic and adaptive behavior in Soar. In addition, LLMs could be used to support Soar's memory architecture, processing and interpreting the contents of Soar's memory, generating new content for the memory, and helping to manage the memory.
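As a rough illustration of the "meta-programmer" idea, the sketch below has an LLM draft a Soar production rule (text in Soar's `sp {...}` syntax) from a natural-language request and hands it to a Soar agent. The `call_llm` function and `SoarAgent` class are placeholders, not a real API; an actual implementation would use Soar's SML bindings and a real LLM client, and would validate generated rules before loading them.

```python
# Sketch: LLM as a "meta-programmer" that writes Soar production rules.
# `call_llm` and `SoarAgent` are hypothetical stand-ins.

PROMPT_TEMPLATE = """You write Soar production rules.
Task description: {task}
Respond with a single rule in Soar's sp {{...}} syntax."""

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call. Returns a canned rule so the
    # sketch runs end to end.
    return """sp {propose*greet
   (state <s> ^io.input-link.person <p>)
-->
   (<s> ^operator <o> +)
   (<o> ^name greet ^target <p>)}"""

class SoarAgent:
    """Hypothetical wrapper around a Soar agent."""
    def __init__(self):
        self.productions = []

    def load_production(self, rule_text: str):
        # A real implementation would validate the rule and pass it to
        # the Soar kernel; here we only do a crude syntax check.
        if not rule_text.strip().startswith("sp {"):
            raise ValueError("not a Soar production rule")
        self.productions.append(rule_text)

def generate_and_load_rule(agent: SoarAgent, task: str) -> str:
    rule = call_llm(PROMPT_TEMPLATE.format(task=task))
    agent.load_production(rule)
    return rule

agent = SoarAgent()
print(generate_and_load_rule(agent, "Greet any person on the input link."))
```

The key design point is that the LLM never acts directly; it only proposes rules, which keeps the resulting behavior inspectable in Soar's symbolic terms and gives a natural place to insert validation before any rule takes effect.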
Advantages:
Disadvantages:
Conclusion:
The fusion of Soar and LLMs opens an exciting new frontier in AI research. While there are certainly challenges to address, such a system could combine the strengths of both approaches, resulting in a more robust and versatile AI. However, careful design, implementation, and testing will be critical to the success of this endeavor. We invite further discussion and experimentation around this concept in the scientific community.