Combining AI with React for a Smarter Frontend

Jesse Hall, senior developer advocate with MongoDB, explained the building blocks for integrating artificial intelligence into React apps.
Nov 21st, 2023 12:30pm by

Frontend development will have to incorporate artificial intelligence sooner rather than later. The burning questions, though, are what that even looks like, and whether it must be a chatbot.

“Almost every application going forward is going to use AI in some capacity. AI is going to wait for no one,” said Jesse Hall, a senior developer advocate at MongoDB, during last week’s second virtual day of React Summit US. “In order to stay competitive, we need to build intelligence into our applications in order to gain rich insights from our data.”

A Tech Stack for React AI Apps

First, developers can take custom data — images, blogs, videos, articles, PDFs, whatever — and generate embeddings using an embedding model, then store those embeddings in a vector database. This doesn’t require LangChain, but the library can help facilitate the process, he added. Once the embeddings are created, it’s possible to accept natural language queries and find relevant information from that custom data, he explained.
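The ingest step he describes — chunk the custom data, run each chunk through an embedding model, store the vectors — can be sketched in a few lines of TypeScript. The `embed` function below is a hypothetical stand-in for a real embedding model (in practice an OpenAI or Hugging Face API call), and the “vector database” is just an in-memory array so the flow is runnable:

```typescript
type StoredChunk = { text: string; embedding: number[] };

// Hypothetical stand-in for a real embedding model: folds character
// codes into a tiny fixed-size vector. A real model returns hundreds
// of dimensions capturing semantic meaning.
function embed(text: string, dims = 4): number[] {
  const v: number[] = new Array(dims).fill(0);
  for (let i = 0; i < text.length; i++) {
    v[i % dims] += text.charCodeAt(i) / 1000;
  }
  return v;
}

// Split a document into fixed-size chunks before embedding.
function chunkText(doc: string, size = 200): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < doc.length; i += size) {
    chunks.push(doc.slice(i, i + size));
  }
  return chunks;
}

// The "vector database" is an in-memory array in this sketch.
const vectorStore: StoredChunk[] = [];

// Ingest: chunk, embed, store — the pipeline described above.
function ingest(doc: string): void {
  for (const text of chunkText(doc)) {
    vectorStore.push({ text, embedding: embed(text) });
  }
}
```

In a real stack, the chunking and routing is where LangChain typically helps, and the array would be replaced by a vector store such as MongoDB Atlas.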

MongoDB Senior Developer Advocate Jesse Hall explains the RAG workflow.

“We send the user’s natural language query to an LLM, which vectorizes the query, then we use vector search to find information that is closely related — semantically related — to the user’s query, and then we return those results,” Hall said.
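The retrieval step Hall describes is, at its core, a nearest-neighbor search over stored embeddings. A minimal sketch using cosine similarity (one common metric for semantic closeness; the function and type names here are illustrative, not from any particular library):

```typescript
// Cosine similarity measures how close two embeddings point in the
// same direction: 1 means identical, 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

type Scored = { text: string; score: number };

// Rank every stored chunk against the query embedding and return
// the k most semantically related results.
function vectorSearch(
  query: number[],
  store: { text: string; embedding: number[] }[],
  k = 3
): Scored[] {
  return store
    .map(({ text, embedding }) => ({
      text,
      score: cosineSimilarity(query, embedding),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

A production vector database performs this search with approximate-nearest-neighbor indexes rather than a full scan, but the ranking idea is the same.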

For example, the results might provide a text summary or links to specific document pages, he added.

“Imagine your React app has an intelligent chatbot with RAG [Retrieval Augmented Generation] and vector embeddings. This chatbot could pull in real-time data, maybe the latest product inventory, and offer it during a customer service interaction, [using] RAG and vector embeddings,” he said. “Your React app isn’t just smart, it’s adaptable, real-time and incredibly context-aware.”

To put a tech stack around that, he suggested developers could use Next.js version 13.5 with its App Router, deployed on Vercel, then connect to OpenAI’s GPT-3.5 Turbo or GPT-4. LangChain could also be a crucial part of the stack because it helps with data preprocessing, routing data to the proper storage, and making the AI part of the app more efficient, he said. He also suggested Vercel’s AI SDK, an open source library designed for building conversational, streaming user interfaces.
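The streaming UIs that Vercel’s AI SDK targets boil down to consuming a token stream as it arrives rather than waiting for a full response. This framework-free sketch uses a fake in-memory generator in place of a real model call, purely to show the shape of that flow:

```typescript
// Hypothetical stand-in for a streaming LLM call: yields tokens one
// at a time, the way a real model's streaming response would.
async function* fakeLLMStream(prompt: string): AsyncGenerator<string> {
  const tokens = `You asked: "${prompt}"`.split(" ");
  for (const t of tokens) {
    yield t + " ";
  }
}

// Consume the stream the way a streaming chat UI would: append each
// chunk to the displayed text as soon as it arrives.
async function streamToString(prompt: string): Promise<string> {
  let out = "";
  for await (const chunk of fakeLLMStream(prompt)) {
    out += chunk; // a React component would set state here per chunk
  }
  return out.trim();
}
```

In a real app, the AI SDK wraps the model’s streaming response and the React hooks that render it; the async-iteration pattern underneath is the same.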

Then, not surprisingly for a MongoDB developer advocate, he suggested MongoDB to store the vector embeddings and MongoDB Atlas Vector Search to query them.

“It’s a game changer for AI applications, enabling us to provide a more contextual and meaningful user experience by storing our vector embeddings directly in our application database, instead of bolting on yet another external service,” he said. “And it’s not just vector search. MongoDB Atlas itself brings a new level of power to our generative AI capabilities.”

When combined, this technology stack would enable smarter, more powerful React applications, he said.

“Remember, the future is not just about smarter AI, but also about how well it’s integrated into user-centric platforms like your next React-based project,” Hall said.

How to Approach GPTs

Hall, who also creates the YouTube show codeSTACKr, broke down the terms and technology that developers need in order to incorporate artificial intelligence into their React applications, starting with what to do with generative pre-trained transformers (GPTs).

“It’s not merely about leveraging the power of GPT in React. It’s about taking your React applications to the next level by making them intelligent and context-aware,” Hall said. “We’re not just integrating AI into React, we’re optimizing it to be as smart and context-aware as possible.”

There’s a huge demand for building intelligence into applications and to make faster, personalized experiences for users, he added. Smarter apps will use AI-powered models to take action autonomously for the user. That could look like a chatbot, but it could also look like personalized recommendations and fraud detection.

The results will be two-fold, Hall said.

“First, your apps drive competitive advantage by deepening user engagement and satisfaction as they interact with your application,” he explained. “Second, your apps unlock higher efficiency and profitability by making intelligent decisions faster on fresher, more accurate data.”

AI will be used to power the user-facing aspects of applications, but it will also lead to “fresh data and insights” from those interactions, which in turn will power a more efficient business decision model, he said.

GPTs, Meet React

Drilling down on GPTs, aka large language models, he noted that these models are not perfect.

“One of their key limitations is their static knowledge base,” he said. “They only know what they’ve been trained on. There are integrations with some models now that can search the internet for newer information. But how do we know that the information that they’re finding on the internet is accurate? They can hallucinate very confidently, I might add. So how can we minimize this?”

The models can be made to be real-time, adaptable and more aligned with specific needs by using React, large language models and RAG, he explained.

“We’re not just integrating AI into React, we’re optimizing it to be as smart and context-aware as possible,” he said.

He explained what’s involved in RAG, starting with vectors. Vectors are the building blocks that allow developers to represent complex, multidimensional data in a format that’s easy to manipulate and understand. Sometimes vectors are referred to as vector embeddings, or just embeddings.

“Now the simplest explanation is a vector is a numerical representation of data: an array of numbers. And these numbers are coordinates in an n-dimensional space, where n is the array length. So however many numbers we have in the array is how many dimensions we have,” he explained.

For example, video games use 2-D and 3-D coordinates to track where objects are in the game’s world. But what makes vectors important in AI is that they enable semantic search, he said.
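The coordinates idea can be made concrete in a few lines: a vector is just an array, its length is the dimension count, and distance in that space is computed the same way whether it is a 2-D game position or a higher-dimensional embedding. A minimal sketch:

```typescript
// A vector is just an array of numbers; its length is the number of
// dimensions. Euclidean distance works in any number of dimensions.
function euclideanDistance(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    sum += (a[i] - b[i]) ** 2;
  }
  return Math.sqrt(sum);
}

// Example: a 2-D game coordinate, measured from the origin.
// euclideanDistance([3, 4], [0, 0]) === 5
```

Semantic search typically uses angle-based measures like cosine similarity rather than raw distance, but both rest on the same picture of vectors as points in n-dimensional space.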

“In simpler terms, they let us find information that is contextually relevant, not just a keyword search,” Hall said. “And the data source is not just limited to text. It can also be images, video, or audio — these can all be converted to vectors.”

So step one is creating vectors, and the way to do that is with an encoder. Encoders define how the information is organized in the vector space, and different types of encoders organize vectors in different ways, Hall explained. For example, there are encoders for text, audio, images and so on. Most of the popular encoders can be found on Hugging Face or OpenAI, he added.

Finally, RAG comes into play. RAG is “an AI framework for retrieving facts from an external knowledge base to ground large language models (LLMs) on the most accurate, up-to-date information and to give users insight into LLMs’ generative process,” according to IBM.

It does so by bringing together generative models with vector databases and LangChain.

“RAG leverages vectors to pull in real-time, context-relevant data and to augment the capabilities of an LLM,” Hall explained. “Vector search capabilities can augment the performance and accuracy of GPT models by providing a memory or a ground truth to reduce hallucinations, provide up-to-date information, and allow access to private data.”
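The “augmentation” half of RAG is ultimately prompt construction: the chunks returned by vector search are stitched into the prompt so the model answers from grounded, up-to-date context instead of its static training data. A simplified sketch (the instruction wording is illustrative, not prescribed by Hall):

```typescript
// Build a grounded prompt from retrieved chunks. Numbering the chunks
// lets the model cite which piece of context it used.
function buildRagPrompt(question: string, retrieved: string[]): string {
  const context = retrieved.map((c, i) => `[${i + 1}] ${c}`).join("\n");
  return (
    `Answer using only the context below. ` +
    `If the answer is not in the context, say you don't know.\n\n` +
    `Context:\n${context}\n\nQuestion: ${question}`
  );
}
```

The assembled string is what gets sent to the LLM; the “say you don’t know” instruction is one common tactic for reducing the confident hallucinations Hall warns about.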
