In this section we focus on two things:
- how the proxy object is created
- how a method call on the proxy object is executed

Creating the proxy object
The proxy object is created via AiServices.create(Coder.JavaCoder.class, model). Since AiServices is an abstract class, the source ships a default subclass, DefaultAiServices, which contains the core implementation. Its build() method creates the proxy object for the given interface:
```java
public T build() {
    // Verify that a ChatLanguageModel is configured
    this.performBasicValidation();

    // If the service implements ChatMemoryAccess, a ChatMemoryProvider must be configured
    if (!this.context.hasChatMemory() && ChatMemoryAccess.class.isAssignableFrom(this.context.aiServiceClass)) {
        throw IllegalConfigurationException.illegalConfiguration(
                "In order to have a service implementing ChatMemoryAccess, "
                        + "please configure the ChatMemoryProvider on the '%s'.",
                this.context.aiServiceClass.getName());
    }

    // Walk every method of the AI service interface
    for (Method method : this.context.aiServiceClass.getMethods()) {
        // Verify that no method carries @Moderate without a configured ModerationModel
        if (method.isAnnotationPresent(Moderate.class) && this.context.moderationModel == null) {
            throw IllegalConfigurationException.illegalConfiguration(
                    "The @Moderate annotation is present, but the moderationModel is not set up. "
                            + "Please ensure a valid moderationModel is configured before using the @Moderate annotation.");
        }
        if (method.getReturnType() == Result.class
                || method.getReturnType() == List.class
                || method.getReturnType() == Set.class) {
            TypeUtils.validateReturnTypesAreProperlyParametrized(method.getName(), method.getGenericReturnType());
        }
        // Verify that no parameter uses @MemoryId without a configured ChatMemoryProvider
        if (!this.context.hasChatMemory()) {
            for (Parameter parameter : method.getParameters()) {
                if (parameter.isAnnotationPresent(MemoryId.class)) {
                    throw IllegalConfigurationException.illegalConfiguration(
                            "In order to use @MemoryId, please configure the ChatMemoryProvider on the '%s'.",
                            this.context.aiServiceClass.getName());
                }
            }
        }
    }

    // Create the proxy object via a JDK dynamic proxy
    Object proxyInstance = Proxy.newProxyInstance(
            this.context.aiServiceClass.getClassLoader(),
            new Class[]{this.context.aiServiceClass},
            new InvocationHandler() {

                // Cached thread pool for asynchronous tasks (e.g. moderation)
                private final ExecutorService executor = Executors.newCachedThreadPool();

                public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
                    // Methods declared on Object are invoked on the handler itself
                    if (method.getDeclaringClass() == Object.class) {
                        return method.invoke(this, args);
                    }
                    // Methods of the ChatMemoryAccess interface
                    if (method.getDeclaringClass() == ChatMemoryAccess.class) {
                        return switch (method.getName()) {
                            case "getChatMemory" -> context.chatMemoryService.getChatMemory(args[0]);
                            case "evictChatMemory" -> context.chatMemoryService.evictChatMemory(args[0]) != null;
                            default -> throw new UnsupportedOperationException(
                                    "Unknown method on ChatMemoryAccess class : " + method.getName());
                        };
                    }

                    // Validate the method parameters
                    validateParameters(method);

                    // Find the memory id, falling back to "default"
                    Object memoryId = findMemoryId(method, args).orElse("default");

                    // Get or create the chat memory
                    ChatMemory chatMemory = context.hasChatMemory()
                            ? context.chatMemoryService.getOrCreateChatMemory(memoryId)
                            : null;

                    // Prepare the system message
                    Optional<SystemMessage> systemMessage = prepareSystemMessage(memoryId, method, args);
                    // Prepare the user message
                    dev.langchain4j.data.message.UserMessage userMessage = prepareUserMessage(method, args);

                    // Retrieval augmentation (RAG), if configured
                    AugmentationResult augmentationResult = null;
                    if (context.retrievalAugmentor != null) {
                        List<ChatMessage> chatMemoryMessages = chatMemory != null ? chatMemory.messages() : null;
                        Metadata metadata = Metadata.from(userMessage, memoryId, chatMemoryMessages);
                        AugmentationRequest augmentationRequest = new AugmentationRequest(userMessage, metadata);
                        augmentationResult = context.retrievalAugmentor.augment(augmentationRequest);
                        userMessage = (dev.langchain4j.data.message.UserMessage) augmentationResult.chatMessage();
                    }

                    // Determine the return type and whether the response is streamed
                    Type returnType = method.getGenericReturnType();
                    boolean streaming = returnType == TokenStream.class || this.canAdaptTokenStreamTo(returnType);
                    boolean supportsJsonSchema = this.supportsJsonSchema();

                    // Check whether JSON Schema is supported
                    Optional<JsonSchema> jsonSchema = Optional.empty();
                    if (supportsJsonSchema && !streaming) {
                        jsonSchema = serviceOutputParser.jsonSchema(returnType);
                    }
                    // Without JSON Schema (and outside streaming), append output format instructions
                    if ((!supportsJsonSchema || jsonSchema.isEmpty()) && !streaming) {
                        userMessage = this.appendOutputFormatInstructions(returnType, userMessage);
                    }

                    // Assemble the message list
                    List<ChatMessage> messages;
                    if (chatMemory != null) {
                        systemMessage.ifPresent(chatMemory::add);
                        chatMemory.add(userMessage);
                        messages = chatMemory.messages();
                    } else {
                        messages = new ArrayList<>();
                        systemMessage.ifPresent(messages::add);
                        messages.add(userMessage);
                    }

                    // Trigger moderation if the method is annotated with @Moderate
                    Future<Moderation> moderationFuture = this.triggerModerationIfNeeded(method, messages);

                    // Prepare the tool execution context
                    ToolExecutionContext toolExecutionContext = context.toolService.executionContext(memoryId, userMessage);

                    // Streaming responses
                    if (streaming) {
                        TokenStream tokenStream = new AiServiceTokenStream(AiServiceTokenStreamParameters.builder()
                                .messages(messages)
                                .toolSpecifications(toolExecutionContext.toolSpecifications())
                                .toolExecutors(toolExecutionContext.toolExecutors())
                                .retrievedContents(augmentationResult != null ? augmentationResult.contents() : null)
                                .context(context)
                                .memoryId(memoryId)
                                .build());
                        return returnType == TokenStream.class ? tokenStream : this.adapt(tokenStream, returnType);
                    }

                    // Non-streaming responses
                    ResponseFormat responseFormat = null;
                    if (supportsJsonSchema && jsonSchema.isPresent()) {
                        responseFormat = ResponseFormat.builder()
                                .type(ResponseFormatType.JSON)
                                .jsonSchema(jsonSchema.get())
                                .build();
                    }

                    // Build the chat request
                    ChatRequestParameters parameters = ChatRequestParameters.builder()
                            .toolSpecifications(toolExecutionContext.toolSpecifications())
                            .responseFormat(responseFormat)
                            .build();
                    ChatRequest chatRequest = ChatRequest.builder()
                            .messages(messages)
                            .parameters(parameters)
                            .build();

                    // Execute the chat request
                    ChatResponse chatResponse = context.chatModel.chat(chatRequest);

                    // Verify the moderation result
                    AiServices.verifyModerationIfNeeded(moderationFuture);

                    // Run the tool-calling loop
                    ToolExecutionResult toolExecutionResult = context.toolService.executeInferenceAndToolsLoop(
                            chatResponse, parameters, messages, context.chatModel,
                            chatMemory, memoryId, toolExecutionContext.toolExecutors());
                    chatResponse = toolExecutionResult.chatResponse();
                    FinishReason finishReason = chatResponse.metadata().finishReason();

                    // Build and parse the response
                    Response<AiMessage> response = Response.from(
                            chatResponse.aiMessage(), toolExecutionResult.tokenUsageAccumulator(), finishReason);
                    Object parsedResponse = serviceOutputParser.parse(response, returnType);

                    // If the return type is Result, wrap the parsed response in a Result
                    return TypeUtils.typeHasRawClass(returnType, Result.class)
                            ? Result.builder()
                                    .content(parsedResponse)
                                    .tokenUsage(toolExecutionResult.tokenUsageAccumulator())
                                    .sources(augmentationResult == null ? null : augmentationResult.contents())
                                    .finishReason(finishReason)
                                    .toolExecutions(toolExecutionResult.toolExecutions())
                                    .build()
                            : parsedResponse;
                }

                // Checks whether the TokenStream can be adapted to the given return type
                private boolean canAdaptTokenStreamTo(Type returnType) {
                    for (TokenStreamAdapter adapter : tokenStreamAdapters) {
                        if (adapter.canAdaptTokenStreamTo(returnType)) {
                            return true;
                        }
                    }
                    return false;
                }

                // Adapts the TokenStream to the given return type
                private Object adapt(TokenStream tokenStream, Type returnType) {
                    for (TokenStreamAdapter adapter : tokenStreamAdapters) {
                        if (adapter.canAdaptTokenStreamTo(returnType)) {
                            return adapter.adapt(tokenStream);
                        }
                    }
                    throw new IllegalStateException("Can't find suitable TokenStreamAdapter");
                }

                // Checks whether the chat model supports JSON Schema responses
                private boolean supportsJsonSchema() {
                    return context.chatModel != null
                            && context.chatModel.supportedCapabilities().contains(Capability.RESPONSE_FORMAT_JSON_SCHEMA);
                }

                // Appends output format instructions to the user message
                private dev.langchain4j.data.message.UserMessage appendOutputFormatInstructions(
                        Type returnType, dev.langchain4j.data.message.UserMessage userMessage) {
                    String outputFormatInstructions = serviceOutputParser.outputFormatInstructions(returnType);
                    String text = userMessage.singleText() + outputFormatInstructions;
                    return Utils.isNotNullOrBlank(userMessage.name())
                            ? dev.langchain4j.data.message.UserMessage.from(userMessage.name(), text)
                            : dev.langchain4j.data.message.UserMessage.from(text);
                }

                // Runs moderation asynchronously when the method is annotated with @Moderate
                private Future<Moderation> triggerModerationIfNeeded(Method method, List<ChatMessage> messages) {
                    return method.isAnnotationPresent(Moderate.class)
                            ? this.executor.submit(() -> {
                                List<ChatMessage> messagesToModerate = AiServices.removeToolMessages(messages);
                                return context.moderationModel.moderate(messagesToModerate).content();
                            })
                            : null;
                }
            });
    return proxyInstance;
}
```
As you can see, the proxy object is created with the plain JDK dynamic proxy mechanism; the only extra work before creating it is three validation steps:
- Verify that a ChatLanguageModel is configured: this is easy to understand, since without one the proxy object cannot use a large language model at all.
- Verify that no interface method is annotated with @Moderate while no ModerationModel is configured.
- Verify that no parameter uses @MemoryId while no ChatMemoryProvider is configured.
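As background, the JDK dynamic proxy mechanism that build() relies on can be sketched in isolation. The Greeter interface and handler below are invented for illustration and are not part of LangChain4j:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    // A hypothetical interface standing in for an AI service interface
    interface Greeter {
        String greet(String name);
    }

    // Creates a proxy whose behavior is supplied entirely by the InvocationHandler,
    // just as DefaultAiServices supplies the behavior of the AI service interface
    static Greeter create() {
        InvocationHandler handler = (proxy, method, args) ->
                "Hello, " + args[0] + " (handled by " + method.getName() + ")";
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                handler);
    }

    public static void main(String[] args) {
        // Every call on the proxy is routed into the handler
        System.out.println(create().greet("world"));
    }
}
```

The AI service proxy works the same way: the interface has no implementation class at all; the InvocationHandler intercepts each call and translates it into model requests.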
@Moderate and ModerationModel
The name @Moderate comes from content moderation: it is a safety mechanism. For example:

```java
@SystemMessage("Please act as a senior Java expert and, given the task description, write efficient, maintainable, and extensible code")
@Moderate
String code(String task);
```
With @Moderate on the code() method, every call to code() triggers two model calls:
- First the configured ModerationModel (if none is configured, creating the proxy object fails outright); it reviews the method input for sensitive or unsafe content.
- Only then the configured ChatLanguageModel.
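The two-call pattern can be sketched with plain JDK concurrency. Everything below (the isFlagged check and the string standing in for the chat call) is a hypothetical stand-in, not LangChain4j code; it only mirrors the shape of triggerModerationIfNeeded/verifyModerationIfNeeded, where moderation runs asynchronously and its verdict is checked before the chat result is returned:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ModerationFlowDemo {

    // Hypothetical stand-in for a ModerationModel: flags input containing "attack"
    static boolean isFlagged(String input) {
        return input.contains("attack");
    }

    // Moderation is submitted asynchronously, the (simulated) chat call proceeds,
    // and the moderation verdict is checked before the result is returned
    static String chatWithModeration(String input) throws Exception {
        ExecutorService executor = Executors.newCachedThreadPool();
        try {
            Future<Boolean> moderationFuture = executor.submit(() -> isFlagged(input));
            String chatResult = "model response to: " + input; // stand-in for the ChatLanguageModel call
            if (moderationFuture.get()) {
                throw new IllegalStateException("Content violates content policy: " + input);
            }
            return chatResult;
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(chatWithModeration("write a bubble sort"));
    }
}
```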
A ModerationModel is configured like this:
```java
OpenAiChatModel model = OpenAiChatModel.builder()
        .apiKey(apiKey)
        .httpClientBuilder(new SpringRestClientBuilder())
        .modelName("gpt-4o-mini")
        .build();

ModerationModel moderationModel = OpenAiModerationModel.builder()
        .apiKey(apiKey)
        .httpClientBuilder(new SpringRestClientBuilder())
        .modelName("gpt-4o-mini")
        .build();

interface JavaCoder extends Coder {

    @SystemMessage("Please act as a senior Java expert and, given the task description, write efficient, maintainable, and extensible code")
    @Moderate
    String code(String task);

    static JavaCoder create() {
        return AiServices.builder(JavaCoder.class)
                .chatLanguageModel(model)
                .moderationModel(moderationModel)
                .build();
    }
}
```
Although OpenAiChatModel and OpenAiModerationModel both come from OpenAI, you can think of OpenAiModerationModel as the more specialized model where safety is concerned.
Executing methods on the proxy object
Once the proxy object has been created, you can invoke its methods, and every invocation lands in the invoke() method of the InvocationHandler shown in the source above. This invoke() method is one of the most important pieces of LangChain4j and touches a great many components and features. In this section we only care about how @SystemMessage is parsed into the system prompt, how it is combined with the user's input, and how the result is sent to the large model to obtain a response.
The invoke() source contains these two lines:
```java
// Prepare the system message
Optional<SystemMessage> systemMessage = DefaultAiServices.this.prepareSystemMessage(memoryId, method, args);
// Prepare the user message
dev.langchain4j.data.message.UserMessage userMessage = DefaultAiServices.prepareUserMessage(method, args);
```
They call prepareSystemMessage() and prepareUserMessage() respectively, and both receive the method currently being executed on the proxy together with its arguments.
Before looking at prepareSystemMessage(), we need one more @SystemMessage feature. Earlier we defined the SystemMessage like this:
```java
@SystemMessage("Please act as a senior Java expert and, given the task description, write efficient, maintainable, and extensible code")
```
Here "Java expert" is hard-coded, but an AI programmer should not be limited to Java: the role should be chosen by the caller. In other words, "Java expert" should be a variable, which we can express as:
```java
@SystemMessage("Please act as a senior {{userRole}} expert and, given the task description, write efficient, maintainable, and extensible code")
@UserMessage("{{task}}")
String code(@V("task") String task, @V("userRole") String userRole);
```
Here {{task}} and {{userRole}} are variables whose values are supplied by the caller when invoking code(). Note that since code() now has two parameters, the user message must be identified explicitly: either annotate the parameter directly with @UserMessage, or bind it with @V and reference it from an @UserMessage template, as above.
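To make the {{...}} substitution concrete, here is a minimal, self-contained sketch of what a prompt template engine does. This is not LangChain4j's PromptTemplate, just an illustrative regex-based stand-in:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemplateDemo {

    // Simplified stand-in for PromptTemplate.apply():
    // replaces each {{name}} with the matching value from the variables map
    static String fill(String template, Map<String, Object> variables) {
        Matcher m = Pattern.compile("\\{\\{(\\w+)}}").matcher(template);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            Object value = variables.get(m.group(1));
            m.appendReplacement(sb, Matcher.quoteReplacement(String.valueOf(value)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String template = "Please act as a senior {{userRole}} expert";
        // prints: Please act as a senior Java expert
        System.out.println(fill(template, Map.of("userRole", "Java")));
    }
}
```

The variables map corresponds to what findTemplateVariables() collects from the @V-annotated parameters.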
With that scenario in mind, let's look at the implementation of prepareSystemMessage():
```java
private Optional<SystemMessage> prepareSystemMessage(Object memoryId, Method method, Object[] args) {
    return this.findSystemMessageTemplate(memoryId, method)
            .map(systemMessageTemplate -> PromptTemplate.from(systemMessageTemplate)
                    .apply(findTemplateVariables(systemMessageTemplate, method, args))
                    .toSystemMessage());
}

private Optional<String> findSystemMessageTemplate(Object memoryId, Method method) {
    // Read the @SystemMessage annotation from the method
    dev.langchain4j.service.SystemMessage annotation =
            method.getAnnotation(dev.langchain4j.service.SystemMessage.class);
    // The template can also come from a resource file,
    // e.g. @SystemMessage(fromResource = "my-prompt-template.txt")
    return annotation != null
            ? Optional.of(getTemplate(method, "System", annotation.fromResource(), annotation.value(), annotation.delimiter()))
            : this.context.systemMessageProvider.apply(memoryId);
}

private static Map<String, Object> findTemplateVariables(String template, Method method, Object[] args) {
    // Collect the parameters of the method currently being executed
    Parameter[] parameters = method.getParameters();
    Map<String, Object> variables = new HashMap<>();
    for (int i = 0; i < parameters.length; i++) {
        variables.put(getVariableName(parameters[i]), args[i]);
    }
    if (template.contains("{{it}}") && !variables.containsKey("it")) {
        variables.put("it", getValueOfVariableIt(parameters, args));
    }
    return variables;
}

private static String getVariableName(Parameter parameter) {
    // @V takes precedence over the reflective parameter name
    V annotation = parameter.getAnnotation(V.class);
    return annotation != null ? annotation.value() : parameter.getName();
}

private static String getValueOfVariableIt(Parameter[] parameters, Object[] args) {
    if (parameters.length == 1) {
        Parameter parameter = parameters[0];
        // A single parameter without @MemoryId/@UserMessage/@UserName
        // (and without a non-"it" @V) supplies the value of {{it}}
        if (!parameter.isAnnotationPresent(MemoryId.class)
                && !parameter.isAnnotationPresent(UserMessage.class)
                && !parameter.isAnnotationPresent(UserName.class)
                && (!parameter.isAnnotationPresent(V.class) || isAnnotatedWithIt(parameter))) {
            return toString(args[0]);
        }
    }
    for (int i = 0; i < parameters.length; i++) {
        if (isAnnotatedWithIt(parameters[i])) {
            return toString(args[i]);
        }
    }
    throw IllegalConfigurationException.illegalConfiguration(
            "Error: cannot find the value of the prompt template variable \"{{it}}\".");
}

// True when the parameter is annotated with @V("it")
private static boolean isAnnotatedWithIt(Parameter parameter) {
    V annotation = parameter.getAnnotation(V.class);
    return annotation != null && "it".equals(annotation.value());
}
```
The source also shows that the value attribute of @SystemMessage is a String[]:
```java
//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by FernFlower decompiler)
//

package dev.langchain4j.service;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface SystemMessage {

    String[] value() default {""};

    String delimiter() default "\n";

    String fromResource() default "";
}
```
This means a long system prompt can be split into several strings; at the end they are joined into a single SystemMessage using the delimiter value, and after joining, the variables in the SystemMessage are filled in from the @V bindings, yielding the final SystemMessage.
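The joining step itself is trivial to illustrate. The joinTemplate helper below is a hypothetical stand-in for how getTemplate combines the String[] value with the delimiter (by default "\n"), not the real LangChain4j method:

```java
public class DelimiterDemo {

    // Stand-in for joining the String[] value of @SystemMessage with its delimiter
    static String joinTemplate(String[] parts, String delimiter) {
        return String.join(delimiter, parts);
    }

    public static void main(String[] args) {
        String[] parts = {
                "You are a senior Java expert.",
                "Given the task description, write efficient, maintainable, extensible code."
        };
        // The two strings become one template separated by a newline
        System.out.println(joinTemplate(parts, "\n"));
    }
}
```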
Now for the prepareUserMessage() method; in this section we only care about the following:
```java
/**
 * Prepares the UserMessage object.
 *
 * @param method the invoked method
 * @param args   the method arguments
 * @return the constructed UserMessage
 */
private static dev.langchain4j.data.message.UserMessage prepareUserMessage(Method method, Object[] args) {
    // 1. Obtain the user message template
    String template = getUserMessageTemplate(method, args);
    // 2. Extract template variables from the method and its arguments
    Map<String, Object> variables = findTemplateVariables(template, method, args);
    // 3. Apply the variables to produce a Prompt
    Prompt prompt = PromptTemplate.from(template).apply(variables);
    // 4. Look for a parameter annotated with @UserName
    Optional<String> maybeUserName = findUserName(method.getParameters(), args);
    // 5. Build a UserMessage carrying the user name if present; otherwise the default one
    return maybeUserName
            .map(userName -> dev.langchain4j.data.message.UserMessage.from(userName, prompt.text()))
            .orElseGet(prompt::toUserMessage);
}

/**
 * Obtains the user message template.
 * Priority: method annotation > parameter annotation > the only argument.
 */
private static String getUserMessageTemplate(Method method, Object[] args) {
    // 1. Template from @UserMessage on the method
    Optional<String> templateFromMethodAnnotation = findUserMessageTemplateFromMethodAnnotation(method);
    // 2. Template from @UserMessage on a parameter
    Optional<String> templateFromParameterAnnotation =
            findUserMessageTemplateFromAnnotatedParameter(method.getParameters(), args);
    // 3. Both at once is a configuration error
    if (templateFromMethodAnnotation.isPresent() && templateFromParameterAnnotation.isPresent()) {
        throw IllegalConfigurationException.illegalConfiguration(
                "Error: The method '%s' has multiple @UserMessage annotations. Please use only one.",
                method.getName());
    }
    // 4. Prefer the method annotation
    if (templateFromMethodAnnotation.isPresent()) {
        return templateFromMethodAnnotation.get();
    }
    // 5. Then the parameter annotation
    if (templateFromParameterAnnotation.isPresent()) {
        return templateFromParameterAnnotation.get();
    }
    // 6. Finally, fall back to the only argument
    Optional<String> templateFromTheOnlyArgument =
            findUserMessageTemplateFromTheOnlyArgument(method.getParameters(), args);
    if (templateFromTheOnlyArgument.isPresent()) {
        return templateFromTheOnlyArgument.get();
    }
    throw IllegalConfigurationException.illegalConfiguration(
            "Error: The method '%s' does not have a user message defined.", method.getName());
}

/** Extracts the user message template from the method annotation. */
private static Optional<String> findUserMessageTemplateFromMethodAnnotation(Method method) {
    return Optional.ofNullable(method.getAnnotation(UserMessage.class))
            // Resolve either the resource file or the inline value
            .map(a -> getTemplate(method, "User", a.fromResource(), a.value(), a.delimiter()));
}

/** Extracts the user message template from a parameter annotation. */
private static Optional<String> findUserMessageTemplateFromAnnotatedParameter(Parameter[] parameters, Object[] args) {
    for (int i = 0; i < parameters.length; i++) {
        // The value of a parameter annotated with @UserMessage is used directly as the template
        if (parameters[i].isAnnotationPresent(UserMessage.class)) {
            return Optional.of(toString(args[i]));
        }
    }
    return Optional.empty();
}

/** Without any explicit annotation, tries to use the only argument as the template. */
private static Optional<String> findUserMessageTemplateFromTheOnlyArgument(Parameter[] parameters, Object[] args) {
    // Only applies when there is exactly one un-annotated parameter
    return parameters != null && parameters.length == 1 && parameters[0].getAnnotations().length == 0
            ? Optional.of(toString(args[0]))
            : Optional.empty();
}

/** Finds the value of the parameter annotated with @UserName, if any. */
private static Optional<String> findUserName(Parameter[] parameters, Object[] args) {
    for (int i = 0; i < parameters.length; i++) {
        if (parameters[i].isAnnotationPresent(UserName.class)) {
            return Optional.of(args[i].toString());
        }
    }
    return Optional.empty();
}
```
The logic is fairly simple:
- If the method itself carries @UserMessage, its template is extracted and used.
- Otherwise, among multiple parameters, the value of the parameter annotated with @UserMessage becomes the UserMessage.
- Otherwise, if there is exactly one (un-annotated) parameter, its value is used directly as the UserMessage.
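That resolution priority can be sketched with plain reflection. The @Msg annotation and resolveTemplate helper below are invented stand-ins for @UserMessage and getUserMessageTemplate(), not the real LangChain4j types:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class UserMessageResolution {

    // Hypothetical annotation standing in for dev.langchain4j.service.UserMessage
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.PARAMETER})
    @interface Msg {
        String value() default "";
    }

    interface Coder {
        @Msg("Task: {{it}}")
        String withMethodTemplate(String task);

        String withAnnotatedParam(@Msg String task, String role);

        String withOnlyArgument(String task);
    }

    // Mirrors the priority: method annotation > annotated parameter > the only argument
    static String resolveTemplate(Method method, Object[] args) {
        Msg onMethod = method.getAnnotation(Msg.class);
        if (onMethod != null && !onMethod.value().isEmpty()) {
            return onMethod.value();
        }
        Parameter[] parameters = method.getParameters();
        for (int i = 0; i < parameters.length; i++) {
            if (parameters[i].isAnnotationPresent(Msg.class)) {
                return String.valueOf(args[i]);
            }
        }
        if (parameters.length == 1 && parameters[0].getAnnotations().length == 0) {
            return String.valueOf(args[0]);
        }
        throw new IllegalStateException("no user message defined");
    }

    public static void main(String[] args) throws Exception {
        Method m1 = Coder.class.getMethod("withMethodTemplate", String.class);
        Method m2 = Coder.class.getMethod("withAnnotatedParam", String.class, String.class);
        Method m3 = Coder.class.getMethod("withOnlyArgument", String.class);
        System.out.println(resolveTemplate(m1, new Object[]{"sort"}));         // the method template wins
        System.out.println(resolveTemplate(m2, new Object[]{"sort", "Java"})); // the annotated argument wins
        System.out.println(resolveTemplate(m3, new Object[]{"sort"}));         // falls back to the only argument
    }
}
```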
This yields the final SystemMessage and UserMessage. How are they assembled together?
```java
List<ChatMessage> messages;
if (chatMemory != null) {
    // With chat memory, messages are appended to (and read back from) the memory
    systemMessage.ifPresent(chatMemory::add);
    chatMemory.add(userMessage);
    messages = chatMemory.messages();
} else {
    // Without chat memory, simply collect them into a list in order
    messages = new ArrayList<>();
    systemMessage.ifPresent(messages::add);
    messages.add(userMessage);
}
```
We have not configured a ChatMemory yet, so assembly simply means adding the SystemMessage and UserMessage, in that order, to a List; afterwards that List is passed to ChatLanguageModel's chat() method.
So how does ChatLanguageModel's chat() method handle that List? Does it concatenate everything into a single string? It does not. Let's look at the OpenAiChatModel implementation.
First, the request log looks like this:
```
DEBUG: Writing [{
  "model" : "gpt-4o-mini",
  "messages" : [ {
    "role" : "system",
    "content" : "Please act as a senior Java expert and, given the task description, write efficient, maintainable, and extensible code"
  }, {
    "role" : "user",
    "content" : "Write a bubble sort"
  } ],
  "stream" : false
}] as "application/json" with org.springframework.http.converter.StringHttpMessageConverter
```
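To underline that the message list is serialized as structured JSON rather than concatenated, here is a minimal hand-rolled sketch of the body shown in the log. It is illustrative only: the real serialization is done by the HTTP layer, and this toy builder does not escape special characters:

```java
import java.util.List;
import java.util.Map;

public class RequestBodyDemo {

    // Minimal sketch of the wire format: each message keeps its own "role";
    // the list is emitted as a JSON array, never flattened into one string
    static String toJson(String model, List<Map<String, String>> messages) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"model\":\"").append(model).append("\",\"messages\":[");
        for (int i = 0; i < messages.size(); i++) {
            Map<String, String> m = messages.get(i);
            if (i > 0) sb.append(',');
            sb.append("{\"role\":\"").append(m.get("role"))
              .append("\",\"content\":\"").append(m.get("content")).append("\"}");
        }
        return sb.append("]}").toString();
    }

    public static void main(String[] args) {
        String body = toJson("gpt-4o-mini", List.of(
                Map.of("role", "system", "content", "You are a senior Java expert"),
                Map.of("role", "user", "content", "Write a bubble sort")));
        System.out.println(body);
    }
}
```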
```java
public static List<Message> toOpenAiMessages(List<ChatMessage> messages) {
    return messages.stream()
            .map(InternalOpenAiHelper::toOpenAiMessage)
            .collect(Collectors.toList());
}

public static Message toOpenAiMessage(ChatMessage message) {
    if (message instanceof SystemMessage) {
        return dev.langchain4j.model.openai.internal.chat.SystemMessage.from(((SystemMessage) message).text());
    } else if (message instanceof UserMessage) {
        UserMessage userMessage = (UserMessage) message;
        return userMessage.hasSingleText()
                ? dev.langchain4j.model.openai.internal.chat.UserMessage.builder()
                        .content(userMessage.singleText())
                        .name(userMessage.name())
                        .build()
                : dev.langchain4j.model.openai.internal.chat.UserMessage.builder()
                        .content(userMessage.contents().stream()
                                .map(InternalOpenAiHelper::toOpenAiContent)
                                .collect(Collectors.toList()))
                        .name(userMessage.name())
                        .build();
    } else if (message instanceof AiMessage) {
        AiMessage aiMessage = (AiMessage) message;
        if (!aiMessage.hasToolExecutionRequests()) {
            return AssistantMessage.from(aiMessage.text());
        }
        ToolExecutionRequest toolExecutionRequest = aiMessage.toolExecutionRequests().get(0);
        if (toolExecutionRequest.id() == null) {
            FunctionCall functionCall = FunctionCall.builder()
                    .name(toolExecutionRequest.name())
                    .arguments(toolExecutionRequest.arguments())
                    .build();
            return AssistantMessage.builder().functionCall(functionCall).build();
        }
        List<ToolCall> toolCalls = aiMessage.toolExecutionRequests().stream()
                .map(it -> ToolCall.builder()
                        .id(it.id())
                        .type(ToolType.FUNCTION)
                        .function(FunctionCall.builder().name(it.name()).arguments(it.arguments()).build())
                        .build())
                .collect(Collectors.toList());
        return AssistantMessage.builder().content(aiMessage.text()).toolCalls(toolCalls).build();
    } else if (message instanceof ToolExecutionResultMessage) {
        ToolExecutionResultMessage toolExecutionResultMessage = (ToolExecutionResultMessage) message;
        return toolExecutionResultMessage.id() == null
                ? FunctionMessage.from(toolExecutionResultMessage.toolName(), toolExecutionResultMessage.text())
                : ToolMessage.from(toolExecutionResultMessage.id(), toolExecutionResultMessage.text());
    } else {
        throw Exceptions.illegalArgument("Unknown message type: " + message.type());
    }
}

private static Content toOpenAiContent(dev.langchain4j.data.message.Content content) {
    if (content instanceof TextContent) {
        return toOpenAiContent((TextContent) content);
    } else if (content instanceof ImageContent) {
        return toOpenAiContent((ImageContent) content);
    } else if (content instanceof AudioContent) {
        return toOpenAiContent((AudioContent) content);
    } else {
        throw Exceptions.illegalArgument("Unknown content type: " + content);
    }
}

private static Content toOpenAiContent(TextContent content) {
    return Content.builder().type(ContentType.TEXT).text(content.text()).build();
}

private static Content toOpenAiContent(ImageContent content) {
    return Content.builder()
            .type(ContentType.IMAGE_URL)
            .imageUrl(ImageUrl.builder()
                    .url(toUrl(content.image()))
                    .detail(toDetail(content.detailLevel()))
                    .build())
            .build();
}

private static Content toOpenAiContent(AudioContent audioContent) {
    return Content.builder()
            .type(ContentType.AUDIO)
            .inputAudio(InputAudio.builder()
                    .data(ValidationUtils.ensureNotBlank(audioContent.audio().base64Data(), "audio.base64Data"))
                    .format(extractSubtype(ValidationUtils.ensureNotBlank(audioContent.audio().mimeType(), "audio.mimeType")))
                    .build())
            .build();
}
```
Before actually calling the OpenAI API, OpenAiChatModel uses toOpenAiMessages to convert the List: the messages are first wrapped into a ChatRequest, and then OpenAiChatModel's doChat() method is called:
```java
public ChatResponse doChat(ChatRequest chatRequest) {
    OpenAiChatRequestParameters parameters = (OpenAiChatRequestParameters) chatRequest.parameters();
    InternalOpenAiHelper.validate(parameters);

    // Convert the framework-level ChatRequest into an OpenAI ChatCompletionRequest
    ChatCompletionRequest openAiRequest = InternalOpenAiHelper
            .toOpenAiChatRequest(chatRequest, parameters, this.strictTools, this.strictJsonSchema)
            .build();

    // Call the OpenAI endpoint, retrying on failure
    ChatCompletionResponse openAiResponse = RetryUtils.withRetryMappingExceptions(
            () -> this.client.chatCompletion(openAiRequest).execute(),
            this.maxRetries);

    OpenAiChatResponseMetadata responseMetadata = OpenAiChatResponseMetadata.builder()
            .id(openAiResponse.id())
            .modelName(openAiResponse.model())
            .tokenUsage(InternalOpenAiHelper.tokenUsageFrom(openAiResponse.usage()))
            .finishReason(InternalOpenAiHelper.finishReasonFrom(openAiResponse.choices().get(0).finishReason()))
            .created(openAiResponse.created())
            .serviceTier(openAiResponse.serviceTier())
            .systemFingerprint(openAiResponse.systemFingerprint())
            .build();

    return ChatResponse.builder()
            .aiMessage(InternalOpenAiHelper.aiMessageFrom(openAiResponse))
            .metadata(responseMetadata)
            .build();
}
```
This code uses the builder pattern to construct a ChatCompletionRequest containing the model name plus a List<Message> messages. The key line is:

```java
ChatCompletionRequest openAiRequest = InternalOpenAiHelper
        .toOpenAiChatRequest(chatRequest, parameters, this.strictTools, this.strictJsonSchema)
        .build();
```

Now let's look at the converted content.
The content itself is indeed unchanged; what is added is a role property, whose meaning is straightforward: SYSTEM marks the system prompt and USER marks the user prompt. Why does this work? Because the OpenAI API natively supports supplying the system prompt this way.
Note that for models from o1 onward, the API replaces the system message with a developer message.
See https://platform.openai.com/docs/api-reference/chat/create for the details. Note also that the OpenAI API supports two further message types, Assistant message and Tool message; both relate to the tool mechanism, which we will analyze later.