
Setting up and connecting AI apps takes a lot of time and expertise. 🥵


But with NuxtStarter.ai, you can integrate your AI models quickly. ⚡ Let's see how you can deploy your customized AI models on GPU providers like RunPod or Replicate and add them to your NuxtStarter project.


Deploying your own customized models

You don't need to train your own custom models to get started. If your project doesn't require custom models, you can use the pre-made ones from RunPod or Replicate.

All you need is an endpoint to send requests to your model. Just get your model's endpoint URL from one of the following services.


RunPod

RunPod Serverless

Because we're all about quick delivery, we'll show you how to deploy your models on a serverless platform so you don't have to concern yourself with infrastructure and can concentrate on building your app. 🚀

1. In the RunPod dashboard, click on [Templates] > [+ New Template]. Here, you need to choose the Dockerfile for your model. If you want to learn more about creating a Dockerfile, check out the RunPod Worker Tutorial.


2. Once you've created your container template, go to the [Serverless] section and click on [+ New Endpoint]. Here, choose the template you created in the previous step.


3. Name your endpoint, choose one of the available GPUs, and configure how you want your instances to scale. Then, click on [Deploy].


4. Your serverless endpoint will be ready in a couple of minutes. Once it's ready, open your endpoint and find its runsync URL on the endpoint page. It should look something like this: https://api.runpod.ai/v2/123456789/runsync


5. In your NuxtStarter AI project, use this URL to send requests to your model. Refer to the RunPod Features section to learn how to send requests to your model using RunPod.
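The steps above can be sketched in code. This is a minimal example of calling a RunPod serverless endpoint's runsync route, a POST with a Bearer token and a JSON body whose payload sits under the "input" key. The endpoint ID `123456789`, the API key placeholder, and the `prompt` input field are all illustrative values, not part of NuxtStarter itself:

```typescript
interface RunsyncRequest {
  url: string;
  options: { method: string; headers: Record<string, string>; body: string };
}

// Build the request for RunPod's /runsync route.
function buildRunsyncRequest(
  endpointId: string,
  apiKey: string,
  input: Record<string, unknown>,
): RunsyncRequest {
  return {
    url: `https://api.runpod.ai/v2/${endpointId}/runsync`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // RunPod workers receive your payload under the "input" key.
      body: JSON.stringify({ input }),
    },
  };
}

// Usage sketch (e.g. inside a Nuxt server route):
async function runModel(prompt: string) {
  const { url, options } = buildRunsyncRequest(
    "123456789",            // placeholder: your endpoint ID
    "YOUR_RUNPOD_API_KEY",  // placeholder: your RunPod API key
    { prompt },
  );
  const res = await fetch(url, options);
  return res.json(); // runsync waits for the run and returns its output
}
```

In a real project you would read the API key from an environment variable rather than hard-coding it.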


Replicate

1. Retrieve your API Key from your account settings.


2. Next, upload your model to Replicate using the Cog packaging library. The official documentation walks you through this step by step.


3. Once you've uploaded your model to Replicate, you can use your API key to send requests to it. Refer to the Replicate Features section for more detailed documentation.


4. [Optional] If you need more control over your deployed model, you can adjust its deployment settings. You can find more information about this in the official documentation.
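To make the Replicate flow concrete, here is a hypothetical sketch of creating a prediction against Replicate's HTTP API. The version hash, the `prompt` input field, and the token placeholder are illustrative assumptions; check the Replicate docs for your model's actual input schema:

```typescript
interface PredictionRequest {
  url: string;
  options: { method: string; headers: Record<string, string>; body: string };
}

// Build a create-prediction request: POST /v1/predictions with the
// model version hash and your model's input fields.
function buildPredictionRequest(
  apiToken: string,
  version: string,
  input: Record<string, unknown>,
): PredictionRequest {
  return {
    url: "https://api.replicate.com/v1/predictions",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ version, input }),
    },
  };
}

// Usage sketch:
async function predict(prompt: string) {
  const { url, options } = buildPredictionRequest(
    "YOUR_REPLICATE_API_TOKEN",  // placeholder: your API token
    "your-model-version-hash",   // placeholder: your model's version
    { prompt },
  );
  const res = await fetch(url, options);
  // Predictions run asynchronously: the response includes a `urls.get`
  // URL to poll until `status` is "succeeded", then read `output`.
  return res.json();
}
```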


Replicate Deployment

We recommend keeping it simple and sticking with the default endpoints provided by Replicate. 🚀

ChatGPT

You can easily add ChatGPT to your NuxtStarter AI project to make chatbots, conversational interfaces, and more. We handle all the basic setup and retry logic for you. 👌

Just get your OpenAI API key, input your prompt, and receive the response from the ChatGPT model. Refer to the ChatGPT Features to understand how to add ChatGPT to your NuxtStarter AI project.
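Under the hood, that flow boils down to a Chat Completions request like the sketch below. The model name and key placeholder are assumptions, and NuxtStarter's built-in helpers layer retry logic on top of a call like this:

```typescript
// Build a Chat Completions request for OpenAI's API.
function buildChatRequest(apiKey: string, prompt: string) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // placeholder: pick the model you want
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage sketch:
async function chat(prompt: string): Promise<string> {
  const { url, options } = buildChatRequest("YOUR_OPENAI_API_KEY", prompt);
  const res = await fetch(url, options);
  const data = await res.json();
  return data.choices[0].message.content; // the model's reply
}
```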