Contributors: Nicole Ren (GovTech), Ng Wei Cheng (GovTech)
VICA (Virtual Intelligent Chat Assistant) is GovTech's Virtual Assistant platform that leverages Artificial Intelligence (AI) to allow users to create, train and deploy chatbots on their websites. At the time of writing, VICA supports over 100 chatbots and handles over 700,000 user queries a month.
Behind the scenes, VICA's NLP engine uses a variety of technologies and frameworks, ranging from traditional intent-matching systems to generative AI frameworks like Retrieval Augmented Generation (RAG). By keeping up to date with state-of-the-art technologies, our engine is constantly evolving, ensuring that every citizen's query gets matched to the best possible answer.
Beyond simple Question-and-Answer (Q&A) capabilities, VICA aims to supercharge chatbots through conversational transactions. Our goal is to say goodbye to the robotic, awkward form-like experience within a chatbot, and say hello to personalized conversations with human-like assistance.
This article is the first in a two-part series sharing more about the generative AI features we have built in VICA. In this article, we will focus on how LLM agents can help improve the transaction process in chatbots using LangChain's Agent Framework.
- Introduction
- All about LangChain
- LangChain in production
- Challenges of productionizing LangChain
- Use cases of LLM Agents
- Conclusion
- Find out more about VICA
- Acknowledgements
- References
Transaction-based chatbots are conversational agents designed to facilitate and execute specific transactions for users. These chatbots go beyond simple Q&A interactions by allowing users to perform tasks such as booking, purchasing, or form submission directly within the chatbot interface.
In order to perform transactions, chatbots must be customized on the backend to handle additional user flows and make API calls.
The rise of Large Language Models (LLMs) has opened new avenues for simplifying and enhancing the development of these features for chatbots. LLMs can greatly improve a chatbot's ability to comprehend and respond to a wide range of queries, helping to handle complex transactions more effectively.
Even though intent-matching chatbot systems already exist to guide users through predefined transaction flows, LLMs offer significant advantages by maintaining context over multi-turn interactions and handling a wide range of inputs and language variations. Previously, interactions often felt awkward and stilted, as users were required to select options from premade cards or type specific phrases in order to trigger a transaction flow. For example, a slight variation from "Can I make a payment?" to "Let me pay, please" could prevent the transaction flow from triggering. In contrast, LLMs can adapt to various communication styles, allowing them to interpret user input that doesn't fit neatly into predefined intents.
Recognizing this potential, our team decided to leverage LLMs for transaction processing, enabling users to enter transaction flows more naturally and flexibly by breaking down and understanding their intentions. Given that LangChain provides a framework for implementing agentic workflows, we chose to utilize its agent framework to create an intelligent system to process transactions.
In this article, we will also share two use cases we developed that utilize LLM Agents, namely the Department of Statistics (DOS) Statistical Table Builder and the Natural Conversation Facility Booking chatbot.
Before we cover how we made use of LLM Agents to perform transactions, we will first share what LangChain is and why we opted to experiment with this framework.
What is LangChain?
LangChain is an open-source Python framework designed to assist developers in building AI-powered applications leveraging LLMs.
Why use LangChain?
The framework helps to simplify the development process by providing abstractions and templates that enable rapid application building, saving time and reducing the need for our development team to code everything from scratch. This allows us to focus on higher-level functionality and business logic rather than low-level coding details. An example of this is how LangChain streamlines third-party integration with popular service providers like MongoDB, OpenAI, and AWS, facilitating quicker prototyping and reducing the complexity of integrating various services. These abstractions not only accelerate development but also improve collaboration by providing a consistent structure, allowing our team to efficiently build, test, and deploy AI applications.
What is LangChain's Agent Framework?
One of the main features of LangChain is its agent framework. The framework allows for the management of intelligent agents that interact with LLMs and other tools to perform complex tasks.
The three main components of the framework are:
Agents act as a reasoning engine: they decide the appropriate actions to take and the order in which to take them. They employ an LLM to make these decisions. An agent has an AgentExecutor that calls the agent and executes the tools the agent chooses. It also takes the output of each action and passes it back to the agent until the final result is reached.
Tools are interfaces that the agent can make use of. In order to create a tool, a name and description have to be provided. The name and description of the tool are important, as they are added into the agent prompt. This means that the agent decides which tool to use based on the name and description provided.
Chains refer to sequences of calls. A chain can consist of coded-out steps, or just a call to an LLM or a tool. Chains can be customized or used off-the-shelf based on what LangChain provides. A simple example of a chain is LLMChain, a chain that runs queries against LLMs.
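To make the tool interface concrete, here is a minimal sketch of a custom tool using LangChain's Tool class. The weather lookup and its output are hypothetical stand-ins, not one of VICA's actual tools:

```python
from langchain.tools import Tool

def get_weather(query: str) -> str:
    # Hypothetical stand-in: a real tool would call a weather API here.
    return "31 Degrees Celsius, Sunny"

# The name and description are injected into the agent prompt, so the
# agent chooses this tool based on them alone.
weather_tool = Tool(
    name="Weather Tool",
    description="Useful for answering questions about the current weather.",
    func=get_weather,
)
```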
LangChain in production
How did we use LangChain in VICA?
In VICA, we set up a microservice for LangChain that is invoked through a REST API. This facilitates integration by allowing different components of VICA to communicate with LangChain independently. As a result, we can efficiently build our LLM agent without being affected by changes or development in other components of the system.
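As an illustration of this setup, a minimal REST wrapper might look like the sketch below. The route, payload shape, and stubbed agent call are assumptions for illustration, not VICA's actual interface:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_agent(query: str) -> str:
    # Placeholder for the LangChain call, e.g.
    # agent_executor.invoke({"input": query})["output"]
    return "stub response"

@app.route("/agent/invoke", methods=["POST"])  # hypothetical route
def invoke_agent():
    payload = request.get_json()
    return jsonify({"answer": run_agent(payload["query"])})

if __name__ == "__main__":
    app.run(port=8000)
```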
LangChain as a framework is fairly extensive when it comes to the LLM space, covering retrieval methods, agents and LLM evaluation. Here are the components we made use of when creating our LLM Agent.
ReAct Agent
In VICA, we made use of a single-agent system. The agent uses ReAct logic to determine the sequence of actions to take (Yao et al., 2022). This prompt engineering technique helps generate the following:
- Thought (Reasoning taken before choosing the action)
- Action (The action to take, often a tool)
- Action Input (The input to the action)
- Observation (The observation from the tool output)
- Final Answer (The generative final answer that the agent returns)
> Entering new AgentExecutor chain…
The user wants to know the weather today
Action: Weather Tool
Action Input: "Weather today"
Observation: Answer: "31 Degrees Celsius, Sunny"
Thought: I now know the final answer.
Final Answer: The weather today is sunny at 31 degrees Celsius.
> Finished chain.
In the example above, the agent was able to understand the user's intention before choosing the tool to use. Verbal reasoning was also generated, which helps the model plan the sequence of actions to take. If the observation is insufficient to answer the given question, the agent can cycle to a different action in order to get closer to the final answer.
In VICA, we edited the agent prompt to better suit our use case. The base prompt provided by LangChain (link here) is generally sufficient for most common use cases, serving as an effective starting point. However, it can be modified to improve performance and ensure greater relevance to specific applications. This can be done by using a custom prompt and passing it as a parameter to create_react_agent (this may differ depending on your version of LangChain).
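For illustration, here is a minimal sketch of wiring a custom ReAct prompt into create_react_agent, using the LangChain 0.1.x-era API (imports and parameters may differ in your version). The prompt wording is a simplified stand-in for our actual prompt, and it reuses the hypothetical weather_tool from the earlier sketch:

```python
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# {tools}, {tool_names}, {input} and {agent_scratchpad} are required
# variables for create_react_agent's ReAct prompt.
template = """Answer the question using the tools below where needed.

{tools}

Use this format:
Question: the input question
Thought: reason about what to do next
Action: one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (Thought/Action/Action Input/Observation can repeat)
Thought: I now know the final answer
Final Answer: the answer to the question

Question: {input}
{agent_scratchpad}"""

prompt = PromptTemplate.from_template(template)
llm = ChatOpenAI(model="gpt-3.5-turbo")
tools = [weather_tool]  # e.g. the hypothetical tool defined earlier

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke({"input": "What is the weather today?"})
```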
To determine if our custom prompt was an improvement, we employed an iterative prompt engineering approach: Write, Evaluate and Refine (more details here). This process ensured that the prompt generalized effectively across a broad range of test cases. Additionally, we used the base prompt provided by LangChain as a benchmark to evaluate our custom prompts, enabling us to assess their performance with varying additional context across various transaction scenarios.
Custom Tools & Chains (Prompt Chaining)
For the two custom chatbot features in this article, we made use of custom tools that our agent can use to perform transactions. Our custom tools employ prompt chaining to break down and understand a user's request before deciding what to do within the particular tool.
Prompt chaining is a technique where multiple prompts are used in sequence to handle complex tasks or queries. It involves starting with an initial prompt and using its output as input for subsequent prompts, allowing for iterative refinement and contextual continuity. This method enhances the handling of intricate queries, improves accuracy, and maintains coherence by progressively narrowing down the focus.
For each transaction use case, we broke the process into multiple steps, allowing us to give clearer instructions to the LLM at each stage. This method improves accuracy by making tasks more specific and manageable. We can also inject localized context into the prompts, which clarifies the objectives and enhances the LLM's understanding. Based on the LLM's reasoning, our custom chains make requests to external APIs to gather the data needed to perform the transaction.
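The sketch below shows the general shape of such a chain: the first prompt's output feeds the second prompt, and the chain's result then drives an external API call. The prompts and field names are illustrative, not our production chains:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")

# Step 1: narrow the request down to a facility type.
extract_prompt = ChatPromptTemplate.from_template(
    "Extract the facility type mentioned in this booking request. "
    "Reply with the facility type only.\nRequest: {query}"
)

# Step 2: feed step 1's output into a more specific instruction.
detail_prompt = ChatPromptTemplate.from_template(
    "The user wants to book a {facility}. List the details still needed "
    "before the booking API can be called."
)

extract_chain = extract_prompt | llm | StrOutputParser()
facility = extract_chain.invoke({"query": "I want to book a badminton court"})

detail_chain = detail_prompt | llm | StrOutputParser()
next_steps = detail_chain.invoke({"facility": facility})
```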
At every step of prompt chaining, it is essential to implement error handling, as LLMs can sometimes produce hallucinations or inaccurate responses. By incorporating error handling mechanisms such as validation checks, we identified and addressed inconsistencies or errors in the outputs. This allowed us to generate fallback responses for our users that explained where the LLM failed to reason.
Finally, in our custom tools, we refrained from simply using the LLM-generated output as the final response due to the risk of hallucination. As a citizen-facing chatbot, it is essential to prevent our chatbots from disseminating any misleading or inaccurate information. Therefore, we ensure that all responses to user queries are derived from actual data points retrieved through our custom chains. We then format these data points into pre-defined responses, ensuring that users do not see any direct output generated by the LLM.
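The sketch below illustrates this pattern: validate an LLM extraction against known values, fall back gracefully when validation fails, and render only retrieved data points through a pre-defined template (the facility list and wording are hypothetical):

```python
VALID_FACILITIES = {"badminton court", "meeting room", "soccer field"}

def validate_facility(llm_output: str) -> str | None:
    """Validation check on an LLM extraction step; returns None on failure."""
    candidate = llm_output.strip().lower()
    return candidate if candidate in VALID_FACILITIES else None

def build_response(facility: str, slots: list[str]) -> str:
    # Pre-defined template: only validated data points reach the user,
    # never raw LLM output.
    return f"Here are the available {facility} slots: {', '.join(slots)}."

facility = validate_facility("Badminton Court")  # an LLM output, in practice
if facility is None:
    reply = "Sorry, I couldn't work out which facility you meant."  # fallback
else:
    reply = build_response(facility, ["9.30 am", "10.30 am"])
```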
Challenges of using LLMs
Challenge #1: Prompt chaining leads to slow inference times
A challenge with LLMs is their inference time. LLMs have high computational demands due to their large number of parameters and the need to be called repeatedly for real-time processing, leading to relatively slow inference times (several seconds per prompt). VICA is a chatbot that gets 700,000 queries a month. To ensure a good user experience, we aim to provide responses as quickly as possible while maintaining accuracy.
Prompt chaining increases the consistency, controllability and reliability of LLM outputs. However, each additional chain we incorporate significantly slows down our solution, as it necessitates making an extra LLM request. To balance simplicity with efficiency, we set a hard limit on the number of chains to prevent excessive wait times for users. We also opted against higher-performing but slower LLM models such as GPT-4, choosing faster LLMs that still perform generally well.
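On top of capping our own chains, LangChain's executor exposes knobs that bound how long an agent can run; a sketch using the agent from the earlier example (the values are illustrative, not our production settings):

```python
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,            # the ReAct agent from the earlier sketch
    tools=tools,
    max_iterations=3,       # stop after 3 thought/action cycles
    max_execution_time=10,  # or after 10 seconds, whichever comes first
    early_stopping_method="force",
)
```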
Challenge #2: Hallucination
As seen in the recent incident with Google's AI Overviews feature, having LLMs generate outputs can lead to inaccurate or non-factual details. Even though grounding the LLM makes it more consistent and less likely to hallucinate, it does not eliminate hallucination.
As mentioned above, we made use of prompt chaining to perform reasoning tasks for transactions by breaking them down into smaller, easier-to-understand tasks. By chaining LLMs, we are able to extract the information needed to process complex queries. However, for the final output, we crafted non-generative messages as the final response from the reasoning tasks that the LLM performs. This means that in VICA, our users do not see generated responses from our LLM Agent.
Challenges of productionizing LangChain
Challenge #1: Too much abstraction
The main issue with LangChain is that the framework abstracts away too many details, making it very difficult to customize applications for specific real-world use cases.
In order to overcome such limitations, we had to delve into the package and customize certain classes to better suit our use case. For instance, we modified the AgentExecutor class to route the ReAct agent's action input into the tool that was chosen. This gave our custom tools additional context that helped with extracting information from user queries.
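As a rough sketch of the kind of change involved (AgentExecutor's internals are private and vary across LangChain versions, so treat the method name below as an assumption rather than a stable API):

```python
from langchain.agents import AgentExecutor

class ContextAwareAgentExecutor(AgentExecutor):
    """Sketch: enrich the chosen tool's input with extra context."""

    def _perform_agent_action(self, name_to_tool_map, color_mapping,
                              agent_action, run_manager=None):
        # Wrap the ReAct agent's action input with additional context
        # before the selected tool runs (illustrative only).
        agent_action.tool_input = {
            "query": agent_action.tool_input,
            "context": "state extracted from the wider conversation",
        }
        return super()._perform_agent_action(
            name_to_tool_map, color_mapping, agent_action, run_manager
        )
```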
Challenge #2: Lack of documentation
The second issue is the lack of documentation and the constantly evolving framework. This makes development difficult, as it takes time to understand how the framework works by reading the package code. There is also a lack of consistency in how things work, making it hard to pick things up as you go. In addition, with constant updates to existing classes, a version upgrade can result in previously working code suddenly breaking.
If you are planning to use LangChain in production, our advice would be to pin your production version and test before upgrading.
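For example, exact pins in a requirements.txt keep upgrades deliberate (the version numbers below are placeholders; use the versions you have tested):

```
langchain==0.1.16
langchain-openai==0.1.3
```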
Use cases of LLM Agents
Use case #1: Department of Statistics (DOS) Table Builder
When it comes to statistical data about Singapore, users can find it difficult to search for and analyze the information they are looking for. To address this, we came up with a POC that aims to extract and present statistical data in a table format as a feature in our chatbot.
As DOS's API is open for public use, we made use of the API documentation provided on their website. Using the LLM's natural language understanding capabilities, we passed the API documentation into the prompt. The LLM was then tasked to pick the correct API endpoint based on the statistical data the user was asking for. This meant that users could ask for statistical information for annual/half-yearly/quarterly/monthly data, in percentage change or absolute values, within a given time filter. For example, we are able to query specific information such as "GDP for Construction in 2022" or "CPI in quarter 1 for the past 3 years".
We then did further prompt chaining to break the task down even more, allowing for more consistency in our final output. The queries were then processed to generate the statistics presented in a table. As all the information was obtained from the API, none of the numbers displayed are generated by LLMs, thus avoiding any risk of spreading non-factual information.
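A simplified sketch of the endpoint-selection step: pass the API documentation into the prompt and ask the LLM to choose. The documentation snippet and endpoints below are invented placeholders, not DOS's real API:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Placeholder documentation; the real prompt contained DOS's API docs.
API_DOCS = """
/annual?series=<id>: annual values for a statistical series
/quarterly?series=<id>: quarterly values for a statistical series
"""

prompt = ChatPromptTemplate.from_template(
    "You are given this API documentation:\n{api_docs}\n"
    "Pick the single endpoint (with parameters) that best answers the "
    "user's question. Reply with the endpoint only.\nQuestion: {question}"
)

chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | StrOutputParser()
endpoint = chain.invoke(
    {"api_docs": API_DOCS, "question": "GDP for Construction in 2022"}
)
# The endpoint is then validated and called; the table shown to the user
# is built only from the numbers the API returns.
```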
Use case #2: Natural Conversation Facility Booking Chatbot
In today's digital age, the majority of bookings are done through online websites. Depending on the user interface, it can be a tedious process, as you might have to sift through numerous dates to find an available booking slot.
Booking through natural conversation could simplify this process. By typing just one line such as "I want to book a badminton court at Fengshan at 9.30 am", you would be able to get a booking or recommendations from a virtual assistant.
When it comes to booking a facility, there are three things we need from a user:
- The facility type (e.g. Badminton, Meeting room, Soccer)
- Location (e.g. Ang Mo Kio, Maple Tree Business Centre, Hive)
- Date (this week, 26 Feb, today)
Once we are able to detect this information from natural language, we can create a custom booking chatbot that is reusable across multiple use cases (e.g. the booking of hotdesks, the booking of sports facilities, etc.).
The above example illustrates a user inquiring about the availability of a soccer field at 2.30pm. However, the user is missing a required piece of information, namely the date. Therefore, the chatbot asks a clarifying question to obtain the missing date. Once the user provides the date, the chatbot processes this multi-turn conversation and attempts to find any available booking slots that match the user's request. As there was a booking slot matching the user's exact description, the chatbot presents this information as a table.
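A minimal sketch of the extraction-and-clarification logic described above; the prompts, field names, and the search_available_slots helper are hypothetical, not VICA's implementation:

```python
import json

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

extract_prompt = ChatPromptTemplate.from_template(
    "Extract the facility type, location and date from this booking request. "
    'Reply in JSON with keys "facility", "location" and "date"; use null '
    "for anything not mentioned.\nRequest: {query}"
)

llm = ChatOpenAI(model="gpt-3.5-turbo")
raw = (extract_prompt | llm).invoke(
    {"query": "Is there a soccer field available at 2.30pm?"}
).content
fields = json.loads(raw)  # needs a validation check in production

missing = [key for key, value in fields.items() if value is None]
if missing:
    # Ask a clarifying question rather than guessing, e.g. for the date.
    reply = f"Sure! Could you let me know the {missing[0]} for your booking?"
else:
    reply = search_available_slots(fields)  # hypothetical availability lookup
```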
If there are no available booking slots, our facility booking chatbot broadens the search, exploring different timeslots or increasing the search date range. It can also recommend available booking slots based on the user's previous query if that query yields no available bookings. This aims to enhance the user experience by eliminating the need to filter out unavailable dates when making a booking, saving users time and effort.
Because we use LLMs as our reasoning engine, an additional benefit is their multilingual capability, which enables them to reason over and respond to users writing in different languages.
The example above illustrates the chatbot's ability to accurately process the correct facility, dates, and location from a user's message written in Korean, producing the appropriate non-generative response even though there are no available slots for the date range provided.
What we demonstrated was a brief example of how our LLM Agent handles facility booking transactions. In reality, the actual solution is far more complex: it can return multiple available bookings across multiple locations, handle postal codes, handle locations that are too far from the stated location, and so on. Although we needed to make some modifications to the package to fit our specific use case, LangChain's Agent Framework was useful in helping us chain multiple prompts together and use their outputs in the ReAct Agent.
Additionally, we designed this customized solution to be easily extendable to any similar booking system that requires booking through natural conversation.
Conclusion
In this first part of our series, we explored how GovTech's Virtual Intelligent Chat Assistant (VICA) leverages LLM Agents to enhance chatbot capabilities, particularly for transaction-based chatbots.
By integrating LangChain's Agent Framework into VICA's architecture, we demonstrated its potential through the Department of Statistics (DOS) Table Builder and Facility Booking Chatbot use cases. These examples highlight how LangChain can streamline complex transaction interactions, enabling chatbots to handle transaction-related tasks like data retrieval and booking through natural conversation.
LangChain offers features to quickly develop and prototype sophisticated chatbot solutions, allowing developers to harness the power of large language models efficiently. However, challenges like insufficient documentation and excessive abstraction can lead to increased maintenance effort, as customizing the framework to fit specific needs may require significant time and resources. Therefore, evaluating an in-house solution might offer greater long-term customizability and stability.
In the next article, we will cover how chatbot engines can be improved through understanding multi-turn conversations.
Find out more about VICA
Curious about the potential of AI chatbots? If you are a Singapore public service officer, you can visit our website at https://www.vica.gov.sg/ to create your own custom chatbot and find out more!
Acknowledgements
Special thanks to Wei Jie Kong for establishing the requirements for the Facility Booking Chatbot. We would also like to thank Justin Wang and Samantha Yom, our hardworking interns, for their initial work on the DOS Table Builder.
References
Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.