.Net: LLM invoked KernelFunction but final answer does not use result from function #5622

Closed
minhhungit opened this issue Mar 24, 2024 · 6 comments
Labels
.NET, triage

Comments

@minhhungit commented Mar 24, 2024

Describe the bug
I have a function that the LLM invoked. The function returns a fixed value of "10", but the LLM does not use it; it makes the answer up.

To Reproduce
Code to reproduce the behavior:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using System.ComponentModel;
using System.Text;

namespace SerenityFrameworkConsultant
{
    public class MyMathPlugin
    {
        [KernelFunction, Description("Get math result")]
        public string MathCalculator(string math) => "10";
    }

    public class MathAsking
    {
        public static async Task DoSomething()
        {
            string apikey = Environment.GetEnvironmentVariable("PRIVATE_OPENAI_KEY", EnvironmentVariableTarget.User)!;

            // Initialize the kernel
            IKernelBuilder kb = Kernel.CreateBuilder();

            kb.AddOpenAIChatCompletion("gpt-3.5-turbo-0125", apikey);
            kb.Services.AddLogging(c => c.AddConsole().SetMinimumLevel(LogLevel.Trace));

            kb.Plugins.AddFromType<MyMathPlugin>();

            Kernel kernel = kb.Build();

            // Create a new chat
            var ai = kernel.GetRequiredService<IChatCompletionService>();
            ChatHistory chat = new();
            StringBuilder builder = new();
            OpenAIPromptExecutionSettings settings = new() { ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions };

            while (true)
            {
                Console.Write("Question: ");
                var question = Console.ReadLine() ?? string.Empty;

                ChatMessageContent chatResult = await ai.GetChatMessageContentAsync(question, settings, kernel);

                Console.Write($"\n>>> Result: {chatResult}\n\n> ");
            }
        }
    }
}

Expected behavior
The LLM should use the answer that the function provided.

Screenshots
(screenshot omitted)

Platform

  • OS: Windows
  • IDE: Visual Studio
  • Language: C#
  • Source: Microsoft.SemanticKernel Version="1.6.2"
@markwallace-microsoft added the .NET and triage labels on Mar 24, 2024
github-actions bot changed the title to ".Net: LLM invoked KernelFunction but final answer does not use result from function" on Mar 24, 2024
@markwallace-microsoft (Member)

What's happening in this case is as follows:

  1. The request is sent to the LLM with the user's ask and the list of functions.
  2. The LLM decides it needs to call one of the functions.
  3. The function is called and the response is sent back to the LLM.
  4. The LLM then decides how to respond to the user's ask using its own knowledge and the return value from the function.

You are expecting the function call result to be returned, but this is not guaranteed because the LLM decides on the final response.

We are investigating adding the ability to let the developer decide whether or not the LLM is called after a function call, so that the return value can be used as the final answer.
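For reference, later Semantic Kernel versions expose an auto function invocation filter that can do exactly this. A minimal sketch, assuming that newer API (it is not available in the 1.6.2 package referenced in this issue) and the `kb` builder from the repro above:

// Sketch only: IAutoFunctionInvocationFilter and AutoFunctionInvocationContext
// ship in Semantic Kernel versions newer than 1.6.2; names may differ by version.
public sealed class ReturnFunctionResultFilter : IAutoFunctionInvocationFilter
{
    public async Task OnAutoFunctionInvocationAsync(
        AutoFunctionInvocationContext context,
        Func<AutoFunctionInvocationContext, Task> next)
    {
        // Let the kernel invoke the function as usual.
        await next(context);

        // Stop the auto-invocation loop: the function result becomes the
        // final answer instead of being sent back to the LLM for rewriting.
        context.Terminate = true;
    }
}

// Registration on the kernel builder from the repro:
// kb.Services.AddSingleton<IAutoFunctionInvocationFilter, ReturnFunctionResultFilter>();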

@markwallace-microsoft (Member)

Tracking via this request #5436 (comment)

@minhhungit (Author)

@markwallace-microsoft thanks for the explanation, but there should be a way to prompt the LLM to force it to answer based on the result from the plugin, right? I think its behavior just depends on our prompt.

@minhhungit (Author)

It doesn't make sense for the LLM to decide to use a function to get a solution and then change its mind and use its own solution instead. I think we should be able to ask it to use the function result as the source of truth.
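A minimal sketch of that prompt-based approach, reusing `ai`, `kernel`, and `settings` from the repro above and adding a system message (the model is not guaranteed to comply, but it usually helps):

// Ask the model to treat function results as the source of truth.
ChatHistory chat = new(
    "When you call a function, answer using its return value verbatim as the " +
    "source of truth. Do not recalculate or alter the value.");

chat.AddUserMessage("What is 7 + 3?");

// Same auto-invoke settings as in the repro above.
ChatMessageContent reply = await ai.GetChatMessageContentAsync(chat, settings, kernel);
Console.WriteLine(reply);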

@waghmare-omkar

> (quoting @markwallace-microsoft's explanation above)

Hi Mark,

Has this issue been resolved?
I have a function that performs an action but never really returns any data to the LLM. However, the LLM still responds with some text after the function call. Can I avoid that?
See my code below:

OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();
var response = await chatCompletionService.GetChatMessageContentAsync(
    history,
    executionSettings: openAIPromptExecutionSettings,
    kernel: kernel
);

This runs a function, let's say "Turn_on_lights". This is the behavior I see:

  1. The Turn_on_lights function is called.
  2. The LLM responds with some text that says, "Turned on the lights."

How do I avoid the reply in step 2?
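One way to avoid that step-2 reply, sketched here with `history`, `kernel`, and `chatCompletionService` from the snippet above (helper names such as GetOpenAIFunctionToolCalls and TryGetFunctionAndArguments may vary across 1.x versions), is to disable auto-invoke and handle the tool call yourself so nothing is ever sent back to the model:

OpenAIPromptExecutionSettings settings = new()
{
    // The LLM only *requests* the tool call; we invoke it ourselves.
    ToolCallBehavior = ToolCallBehavior.EnableKernelFunctions
};

var result = (OpenAIChatMessageContent)await chatCompletionService.GetChatMessageContentAsync(
    history, executionSettings: settings, kernel: kernel);

foreach (OpenAIFunctionToolCall toolCall in result.GetOpenAIFunctionToolCalls())
{
    if (kernel.Plugins.TryGetFunctionAndArguments(toolCall, out KernelFunction? function, out KernelArguments? arguments))
    {
        // Invoke the function and stop: its result is never returned to the
        // LLM, so there is no follow-up "Turned on the lights" message.
        await function.InvokeAsync(kernel, arguments);
    }
}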

@pranjal-joshi commented Dec 24, 2024

> (quoting @markwallace-microsoft's explanation above)

@markwallace-microsoft

Context:
My function, invoked via Semantic Kernel, is supposed to return a DataTable based on the user query. The table can contain a few thousand rows and is currently returned as an Adaptive Card using the Microsoft Bot Framework.

Issue:
Semantic Kernel tries to summarise/post-process the value returned by my function, which causes significant delays.

Is there any way to disable passing the invoked function response back to the LLM?
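Besides a filter like the one sketched earlier in this thread, one workaround for the large-table case is to skip the second LLM round-trip entirely and invoke the function directly once you know which one serves the query. A sketch with hypothetical plugin, function, and variable names:

// Hypothetical names: ReportsPlugin, GetSalesTable, userQuery.
FunctionResult result = await kernel.InvokeAsync(
    pluginName: "ReportsPlugin",
    functionName: "GetSalesTable",
    arguments: new KernelArguments { ["query"] = userQuery });

// Build the Adaptive Card from the raw DataTable; no LLM summarisation occurs.
System.Data.DataTable? table = result.GetValue<System.Data.DataTable>();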
