I would like to cover the following:
This is what’s needed to fetch from The Movie DB. No packages required; everything happens on the server :)
export default async function Page({ params }: { params: { search: string } }) {
  // Don't await here: pass the promise down so Suspense can show the fallback
  const movieData = searchMovies(params.search);
  return (
    <Suspense fallback={<MovieSearchResultSkeleton />}>
      {/* @ts-expect-error Async Server Component */}
      <MovieSearchResult promise={movieData} />
    </Suspense>
  );
}
export async function searchMovies(searchTerm: string): Promise<any> {
  const res = await fetch(
    `https://api.themoviedb.org/3/search/movie?api_key=${process.env.THEMOVIEDB_API_KEY}&query=${encodeURIComponent(searchTerm)}`,
    { method: "GET", headers: { "Content-Type": "application/json" } }
  );
  return res.json();
}
This is a simple example of an OpenAI request:
const payload: OpenAIStreamPayload = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Who won the world series in 2020?" },
    { role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020." },
    { role: "user", content: "Where was it played?" },
  ],
  temperature: 0.9,
  presence_penalty: 0.6,
  max_tokens: 340,
  stream: true,
};
An example of my prompts:
{
  name: "Me",
  message: `Create a modern version of the movie called "${title}" that was released in ${releaseDate}. Update the plot for a modern audience by including themes of woke-ness, LGBT representation, diversity, and inclusion. If the main character in the original movie is male, please consider gender-swapping the character. Find new actors for the different roles; they should look like the original actors. Also consider actors that are not known for mainstream movies. Don't use: Zendaya, Emma Stone, Michael B. Jordan. Write a medium-length synopsis of the movie, without revealing its title, including the names of the new actors.`,
},
{ name: "AI", message: "" },
{
  name: "Me",
  message: "Find a title for this remake. Return the title only.",
},
{ name: "AI", message: "" },
{
  name: "Me",
  message: `Use the lead actor from the summary to create a character poster. Don't mention the character's name in the description; use the actor's name. Keep the appearance of the character faithful to the original, including clothing and style details. Use the main element of the movie for the background. Avoid using terms like "AI" or "generate." Keep the response brief, with no more than 85 words. Make it in a style like this: (cinematic portrait of ((super mario:1.0) and (princess peach:1.0):1.0) in ((avengers movie:1.0):1.0), (hyperrealism, skin, sharp detail, octane render, soft light:0.9), (by (dave dorman:1.0):1.1)`,
},
The easy way to get started
// ./app/api/chat/route.js
import OpenAI from 'openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

export const runtime = 'edge'

export async function POST(req) {
  const { messages } = await req.json()
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    stream: true,
    messages
  })
  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}
// ./app/page.js
'use client'
import { useChat } from 'ai/react'

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat()
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  )
}
const {
  messages: aiImagePrompts,
  setMessages: setAiImagePromptMessages,
  append: appendAiPrompt,
} = useChat({
  api: "/api/chat/normal",
  onFinish: (message) => {
    generateImage(message.content, "Your title here"); // Replace "Your title here" with the desired title
  },
});

const {
  messages: remakeTitleMessages,
  append: appendTitle,
  setMessages,
} = useChat({
  api: "/api/chat/normal",
  onFinish: (message) => {
    setNewTitle(message.content);
    setAiImagePromptMessages([...remakeTitleMessages]);
    appendAiPrompt({
      role: "user",
      content: `Use the lead actor from the summary to create a character poster.`,
    });
  },
});
const { messages: plot, append } = useChat({
  onFinish: (message) => {
    setMessages([message]);
    appendTitle({
      role: "user",
      content: `Find a title for this remake. Return title only`,
    });
  },
});
export async function POST(req: Request) {
  const { messages } = await req.json()
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo-0613',
    stream: true,
    messages,
    functions
  })
  const data = new experimental_StreamData()
  const stream = OpenAIStream(response, {
    experimental_onFunctionCall: async (
      { name, arguments: args },
      createFunctionCallMessages
    ) => {
      if (name === 'get_current_weather') {
        // Call a weather API here
        const weatherData = {
          temperature: 20,
          unit: args.format === 'celsius' ? 'C' : 'F'
        }
        data.append({ text: 'Some custom data' })
        const newMessages = createFunctionCallMessages(weatherData)
        return openai.chat.completions.create({
          messages: [...messages, ...newMessages],
          stream: true,
          model: 'gpt-3.5-turbo-0613'
        })
      }
    },
    onCompletion(completion) {
      console.log('completion', completion)
    },
    onFinal(completion) {
      data.close()
    },
    experimental_streamData: true
  })
  data.append({ text: 'Hello, how are you?' })
  return new StreamingTextResponse(stream, {}, data)
}
Why is streaming important with AI?
Feature/Aspect | SSE Capabilities and Limitations
---|---
HTTP Method | Uses GET, so no POST bodies.
Query Strings | Supports query strings.
Request Body | Can't send a request body.
Headers | Can set headers, but some browsers impose limits.
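The SSE wire format itself is tiny: each message is an optional `event:` line, one or more `data:` lines, and a terminating blank line. A minimal helper (hypothetical, not from the app code) that builds one frame:

```typescript
// Builds a single SSE frame: "event:" line, "data:" line, blank-line terminator.
function formatSseFrame(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Example frame for the "add" event used later in this post:
formatSseFrame("add", { message: "hello" });
// → 'event: add\ndata: {"message":"hello"}\n\n'
```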
Next.js API
export default function handler(request: NextRequest) {
  const { readable, writable } = new TransformStream();
  const writer = writable.getWriter();

  const headers = new Headers();
  headers.append("Content-Type", "text/event-stream");
  headers.append("Connection", "keep-alive");
  headers.append("Access-Control-Allow-Origin", "*");
  headers.append("Access-Control-Allow-Methods", "GET");

  // ... write events to `writer`, then:
  return new NextResponse(readable, { headers });
}

async function sendEvent(writer, data) {
  const encoder = new TextEncoder();
  await writer.write(encoder.encode(`event: add\ndata: ${JSON.stringify(data)}\n\n`));
}
useEffect(() => {
  const source = new EventSource(
    `/api/remake?releaseDate=${releaseDate}&title=${title}`
  );
  source.addEventListener("add", (e: any) => {
    const json = JSON.parse(e.data);
  });
  // ... error handling and cleanup
}, []);
SSE (Server-Sent Events)
addEventListener("fetch", (event) => {
  event.respondWith(fetchAndApply(event.request));
});

async function fetchAndApply(request) {
  const { readable, writable } = new TransformStream();

  const headers = new Headers();
  headers.append("Content-Type", "text/event-stream");
  headers.append("Cache-Control", "no-cache");
  headers.append("Connection", "keep-alive");
  headers.append("Access-Control-Allow-Origin", "*");
  headers.append(
    "Access-Control-Allow-Headers",
    "Origin, X-Requested-With, Content-Type, Accept"
  );
  headers.append("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");

  const url = new URL(request.url);
  const title = url.searchParams.get("title");
  const releaseDate = url.searchParams.get("releaseDate");

  askQuestions(title, releaseDate, writable);
  return new Response(readable, { headers });
}
async function sendEvent(writer, data) {
  const encoder = new TextEncoder();
  await writer.write(encoder.encode(`event: add\n`));
  await writer.write(encoder.encode(`data: ${JSON.stringify(data)}\n\n`));
}

async function askQuestions(title, releaseDate, writable) {
  const writer = writable.getWriter();
  // ...
  const stream = await OpenAIStream(payload);
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let output = "";
  let done = false;
  while (!done) {
    const { value, done: doneReading } = await reader.read();
    done = doneReading;
    const chunkValue = decoder.decode(value);
    output += chunkValue;
    await sendEvent(writer, { reply: i, message: chunkValue });
  }
}
Feature/Aspect | Chunked Transfer Encoding Considerations
---|---
HTTP Method | Works with various methods (GET, POST, etc.).
Data Integrity | Ensure chunks are correctly assembled on the client side.
Performance | Small chunks can decrease efficiency.
Client Support | Not all clients handle chunked encoding well.
Intermediary Servers | Some proxies/buffers might not support chunked data or could alter it.
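One simple way to keep chunk reassembly correct is newline-delimited JSON: the server writes one JSON object per line, and the client buffers raw chunks and parses only complete lines. A sketch (helper names are mine, not from the app):

```typescript
// Server side: frame each object as exactly one JSON line.
function frame(obj: unknown): string {
  return JSON.stringify(obj) + "\n";
}

// Client side: accumulate chunks in a buffer and parse only complete
// lines, so an object split across two chunks is never parsed half-way.
function parseChunks(chunks: string[]): unknown[] {
  let buffer = "";
  const parsed: unknown[] = [];
  for (const chunk of chunks) {
    buffer += chunk;
    let idx: number;
    while ((idx = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 1);
      if (line) parsed.push(JSON.parse(line));
    }
  }
  return parsed;
}
```

Note how the second chunk below completes the object started in the first; the parser handles the split boundary transparently: `parseChunks(['{"a"', ':1}\n{"b":2}\n'])`.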
exports.handler = awslambda.streamifyResponse(
  async (event, responseStream, context) => {
    const queryStringParameters = event.queryStringParameters;
    const title = queryStringParameters ? queryStringParameters.title : null;
    const releaseDate = queryStringParameters
      ? queryStringParameters.releaseDate
      : null;

    const httpResponseMetadata = {
      statusCode: 200,
      headers: {
        "Content-Type": "application/json",
        "Transfer-Encoding": "chunked",
      },
    };

    responseStream = awslambda.HttpResponseStream.from(
      responseStream,
      httpResponseMetadata
    );

    await askQuestions(title, releaseDate, responseStream);
    responseStream.end();
  }
);
const stream = await OpenAIStream(payload);
const reader = stream.getReader();
const decoder = new TextDecoder();
let output = "";
let done = false;
while (!done) {
  const { value, done: doneReading } = await reader.read();
  done = doneReading;
  const chunkValue = decoder.decode(value);
  output += chunkValue;
  // Write newline-delimited JSON so the client can split on "\n"
  responseStream.write(JSON.stringify({ reply: i, message: chunkValue }) + "\n");
}
useEffect(() => {
  const fetchData = async () => {
    try {
      const response = await fetch(
        `${process.env.AWS_REMAKE_URL}?releaseDate=${releaseDate}&title=${title}`
      );
      const reader = response.body?.getReader();
      const decoder = new TextDecoder("utf-8");
      if (reader) {
        let buffer = "";
        const readChunk = async () => {
          const { value, done } = await reader.read();
          if (done) return;
          buffer += decoder.decode(value, { stream: true });
          let newlineIndex;
          while ((newlineIndex = buffer.indexOf("\n")) !== -1) {
            const jsonStr = buffer.slice(0, newlineIndex);
            buffer = buffer.slice(newlineIndex + 1);
            if (jsonStr !== "" && jsonStr !== "[]") {
              const json = JSON.parse(jsonStr);
              // ... do stuff
            }
          }
          readChunk();
        };
        readChunk();
      }
    } catch (error) {
      console.error("Error fetching data:", error);
    }
  };
  fetchData();
}, []);
Learn more about AWS Lambda response streaming:
https://aws.amazon.com/blogs/compute/introducing-aws-lambda-response-streaming/
For a code example, check out:
https://github.com/aws-samples/serverless-patterns/tree/main/lambda-streaming-ttfb-write-sam
Randomize your question
const topics = [
  "Diverse casting: Remakes feature more diverse casts, promoting representation on screen.",
  "Updated references: Cultural references, jokes, or language are modernized for today's audience.",
  "Gender swaps: Key characters' genders may be swapped, offering fresh perspectives.",
  "Environmental themes: Remakes may incorporate environmental messages or eco-friendly practices.",
  "Modernized settings: Settings and backdrops are updated to reflect contemporary life.",
  "Social issues: Themes like mental health or LGBTQ+ rights may be included to raise awareness.",
  "Expanded female roles: Female characters are given more agency and complex storylines.",
  "Evolving dynamics: Character dynamics change, such as introducing same-sex relationships.",
  "Tonal shifts: The tone may be altered to fit contemporary preferences, e.g., more comedic.",
  "Reinterpretation: Remakes take creative liberties, altering storylines or characters.",
];

// Randomly select 2-3 topics for the plot.
const selectedTopics = topics
  .sort(() => 0.5 - Math.random())
  .slice(0, Math.floor(Math.random() * 2) + 2);
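A side note: `sort(() => 0.5 - Math.random())` is a quick hack, not a uniform shuffle, because the comparator violates the sort contract. If the bias matters, a Fisher-Yates shuffle is the standard fix; a sketch with a stand-in `topics` array:

```typescript
const topics = ["Diverse casting", "Gender swaps", "Tonal shifts", "Reinterpretation"];

// Fisher-Yates: walk from the end, swapping each element with a
// uniformly chosen slot at or before it.
function shuffle<T>(items: T[]): T[] {
  const arr = [...items];
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
}

// Pick 2-3 topics, same as before.
const selectedTopics = shuffle(topics).slice(0, Math.floor(Math.random() * 2) + 2);
```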
Looking up the movie, to support movies newer than the model's 2021 training data
Set up the function definition
const functions: ChatCompletionFunctions[] = [
  {
    name: "get_movie_info",
    description: "Get movie information based on movieId",
    parameters: {
      type: "object",
      properties: {
        movieId: {
          type: "string",
          description: "The movieId",
        },
      },
      required: ["movieId"],
    },
  },
];
Call the API. The question needs to be clear enough for the model to understand that it should call the function.
Make a remake of movieId:192, use the title from the response and don't use movieId in the response
const response = await openai.createChatCompletion({
  model: "gpt-4-0613",
  stream: true,
  messages: replacedMessages,
  functions,
});
Handle the function call: fetch the movie details and feed them back to the model.
const stream = OpenAIStream(response, {
  experimental_onFunctionCall: async (
    { name, arguments: args },
    createFunctionCallMessages
  ) => {
    if (name === "get_movie_info") {
      // logic that gets the movieDetails
      const newMessages = createFunctionCallMessages(movieDetails as any);
      return openai.createChatCompletion({
        messages: [...messages, ...newMessages],
        stream: true,
        model: "gpt-4-0613",
        functions,
      });
    }
  },
});
const result = await openai.createEmbedding({
  model: 'text-embedding-ada-002',
  input: 'My text',
});
{
  "data": [
    {
      "embedding": [
        -0.006929283495992422,
        -0.005336422007530928,
        ...
        -4.547132266452536e-05,
        -0.024047505110502243
      ],
      "index": 0,
      "object": "embedding"
    }
  ],
  "model": "text-embedding-ada-002",
  "object": "list",
  "usage": {
    "prompt_tokens": 5,
    "total_tokens": 5
  }
}
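The returned vectors are only useful in comparison; the usual metric is cosine similarity. A vector store normally computes this for you, but the math is small enough to sketch:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|). Ranges from -1 to 1;
// values near 1 mean the embeddings (and thus the texts) are similar.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosineSimilarity([1, 0], [1, 0]); // → 1 (identical direction)
cosineSimilarity([1, 0], [0, 1]); // → 0 (orthogonal, unrelated)
```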
LangChain is a framework for developing applications powered by language models.
const chain = ConversationalRetrievalQAChain.fromLLM(
  llm,
  vectorStore.asRetriever(5),
  {
    returnSourceDocuments: true,
    memory: new BufferMemory({
      chatHistory: chatHistory,
      memoryKey: 'chat_history',
      inputKey: 'question', // The key for the input to the chain
      outputKey: 'text',
      returnMessages: true // If using with a chat model
    }),
    verbose: true,
    questionGeneratorChainOptions: {
      llm: nonStreamingModel
    }
  }
)
const qa_template = `Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Helpful Answer:`;

const question_generator_template = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`;
Feature/Aspect | OpenAI | Azure OpenAI
---|---|---
Services | ChatGPT, GPT-3, Codex, DALL·E | Access to OpenAI's models like GPT-3, etc.
Data Processing Location | Mostly within the US. | Specific Azure regions: East US, South Central US, West Europe.
Data Encryption | - | Data encrypted with Microsoft-managed keys.
Data Retention | - | Prompts, queries, and responses stored temporarily for up to 30 days.
Enterprise Features | - | Offers security, compliance, regional availability, and more.
Integration & Connectivity | - | Integrates with other Azure Cognitive Services and network features for more control over the service.