Add GPT-4o to your Rails 7 app: Get started with Turbo streams

Luigi Rojas
May 30, 2024

The current tech landscape is brimming with an ever-increasing demand for AI products and functionalities, and Ruby on Rails developers are not being left behind. In this guide, we hope to show just how quickly you can build your next AI-focused app or feature.

We’ll go over the basic steps to build such an app, making it feel blazingly fast and interactive using Turbo Streams and leveraging all the benefits of GPT-4o’s amazing text generation capabilities (and it doesn’t have to be just another chatbot!).

Initial Configuration

For this integration, we’re going to use the ruby-openai gem. First, let’s add it to our Gemfile:
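
# Gemfile

gem "ruby-openai"

Then install it by running bundle install.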

Next, we’ll create an initializer to hold our OpenAI API key. It is recommended that you store it securely in config/credentials.yml.enc, or as an ENV variable using dotenv.

# config/initializers/openai.rb

OpenAI.configure do |config|
  config.access_token = Rails.application.credentials.dig(:openai_access_token)
end
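
If you go the dotenv route instead, the initializer can read the key from the environment. A minimal sketch, assuming you’ve named the variable OPENAI_ACCESS_TOKEN in your .env file:

# config/initializers/openai.rb (ENV/dotenv variant)

OpenAI.configure do |config|
  config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
end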

You can use this initializer to set other defaults as well (ruby-openai also accepts options like organization_id and request_timeout), but that’s all we need for now! Let us move on to the good stuff.

Talking to the API

Initiating a conversation with ChatGPT is as simple as this:

client = OpenAI::Client.new

response = client.chat(
    parameters: {
        model: "gpt-4o",
        messages: [{ role: "user", content: "Hello!"}]
    }
)

puts response.dig("choices", 0, "message", "content")
# => "Hello! How may I assist you today?"

We only need to instantiate an OpenAI::Client, and then call the method corresponding to the endpoint we want to use. In this case, chat uses the “/chat/completions” endpoint.
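
The same pattern applies to the gem’s other endpoints. For instance, client.models.list hits the “/models” endpoint and is a quick way to sanity-check your credentials:

client = OpenAI::Client.new

# Print the IDs of every model available to this API key
models = client.models.list
puts models["data"].map { |m| m["id"] }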

We can also send tons of parameters as part of the request, but only these two are required:

  1. model: Specifies which model to use (e.g., gpt-3.5-turbo or gpt-4o).
  2. messages: An Array of hashes representing a conversation, which serves as our prompt (more on this just below).
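
Since messages is an array, keeping a conversation going is just a matter of sending the history back with every request. A quick sketch of a hypothetical multi-turn exchange:

response = client.chat(
    parameters: {
        model: "gpt-4o",
        messages: [
            { role: "system", content: "You are a helpful assistant." },
            { role: "user", content: "What is the capital of France?" },
            { role: "assistant", content: "The capital of France is Paris." },
            { role: "user", content: "And what is its population?" }
        ]
    }
)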

But before we start working on our API connection, we need to briefly define what our app does!

A bit of business logic

The most obvious approach to using an LLM such as GPT-4o would be implementing a domain-specific version of their chatbot, ChatGPT. However, we will take it a step further and design the app's UI and logic with a more streamlined experience in mind.

There are many ways to implement this, but for this guide, we’ll create a simple application that generates personalized invitation letters based on a set of user-facing inputs.

So first, let’s take a look at our database:
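
In schema terms, here is roughly what we’re aiming for (these tables match the generators we’ll run in a moment):

# A rough sketch of our two tables, migration-style

create_table :letter_creators do |t|
  t.string   :name
  t.string   :recipient_name
  t.string   :event_name
  t.datetime :date_and_time
  t.string   :location
  t.string   :recipient_likes
  t.timestamps
end

create_table :letters do |t|
  t.string     :title
  t.string     :body
  t.references :letter_creator, null: false, foreign_key: true
  t.timestamps
end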

This structure is based on the underlying model for interacting with GPT: a chat. A LetterCreator would represent a chatroom, whereas the Letter would represent an individual message. It may seem that the general-purpose chatbot pattern is inescapable, but the addition of this abstraction layer helps us move into a more conventional user experience, and proves especially useful as we keep adding functionality over time.

Let’s create these models by running the following commands in your terminal:

rails g model LetterCreator name:string recipient_name:string event_name:string date_and_time:datetime location:string recipient_likes:string

rails g model Letter title:string body:string letter_creator:references
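
Don’t forget to run the new migrations:

rails db:migrate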

Make sure to add the association to your LetterCreator model:

# app/models/letter_creator.rb

class LetterCreator < ApplicationRecord
  has_many :letters
end

And this will be the controller for our LetterCreator:

# app/controllers/letter_creators_controller.rb

class LetterCreatorsController < ApplicationController
  before_action :set_letter_creator, only: %i[edit update]

  def new
    # Create an empty LetterCreator and drop the user straight into its edit form
    @letter_creator = LetterCreator.create!
    redirect_to edit_letter_creator_path(@letter_creator.id)
  end

  def edit
    @letter = @letter_creator.letters.last
  end

  def update
    @letter = Letter.new(letter_creator: @letter_creator)
    # Generate the letter body from the user's inputs
    message_creator = MessageCreator.new(params: letter_params)
    response = message_creator.call

    # Save the inputs and the generated letter atomically
    ActiveRecord::Base.transaction do
      @letter_creator.update!(letter_params)
      @letter.assign_attributes(body: response)
      @letter.save!
    end

    redirect_to edit_letter_creator_path(@letter_creator.id)
  end

  private

  def set_letter_creator
    @letter_creator = LetterCreator.find(params[:id])  
  end

  def letter_params
    params.require(:letter_creator).permit(:recipient_name, :event_name, :date_and_time, :location, :recipient_likes)
  end
end

In short, the new action creates an empty LetterCreator and takes the user to its edit form. On update, we update this instance with the user input (letter_params). Simultaneously, we generate a message from those parameters using the MessageCreator service, and we save the response as a Letter.

Make sure to also add these changes to your routes.

# config/routes.rb

# ...

resources :letter_creators do
  resources :letters
end

Our edit view can look something like this (styled using Tailwind CSS):

<!-- app/views/letter_creators/edit.html.erb -->

<div class="flex flex-row justify-center w-full gap-10">
  <div class="flex flex-col">
    <%= form_with model: @letter_creator do |form| %>
      <h2 class="mb-4 text-2xl">Parameters</h2>

      <div class="flex flex-col mb-4">
        <%= form.label :recipient_name, "Recipient name", class: "text-gray-800" %>
        <%= form.text_field :recipient_name, class: "text-sm text-gray-900 border border-gray-300 rounded-lg bg-gray-50" %>
      </div>
      <div class="flex flex-col mb-4">
        <%= form.label :event_name, "Event name", class: "text-gray-800" %>
        <%= form.text_field :event_name, class: "text-sm text-gray-900 border border-gray-300 rounded-lg bg-gray-50" %>
      </div>
      <div class="flex flex-col mb-4">
        <%= form.label :date_and_time, "Date and time", class: "text-gray-800" %>
        <%= form.datetime_local_field :date_and_time, class: "text-sm text-gray-900 border border-gray-300 rounded-lg bg-gray-50" %>
      </div>
      <div class="flex flex-col mb-4">
        <%= form.label :location, "Location", class: "text-gray-800" %>
        <%= form.text_field :location, class: "text-sm text-gray-900 border border-gray-300 rounded-lg bg-gray-50" %>
      </div>
      <div class="flex flex-col mb-4">
        <%= form.label :recipient_likes, "Recipient likes", class: "text-gray-800" %>
        <%= form.text_field :recipient_likes, class: "text-sm text-gray-900 border border-gray-300 rounded-lg bg-gray-50" %>
      </div>

      <%= form.submit "Generate", data: { turbo_submits_with: "Generating..." }, class: "text-white bg-blue-700 hover:bg-blue-800 font-medium rounded-lg text-sm px-5 py-2.5 me-2 mb-2 cursor-pointer" %>
    <% end %>
  </div>
  <div class="flex flex-col">
    <h2 class="mb-4 text-2xl">Letter Preview</h2>
    <div class="flex flex-col max-w-xl p-6 border border-gray-300 border-solid rounded-xl">
      <div class="whitespace-pre-line">
        <%= @letter&.body.presence || "Here you will preview your letter" %>
      </div>
    </div>
  </div>
</div>

The main thing to notice here is that we’re displaying the contents of our Letter using @letter&.body.presence, falling back to placeholder text until a generated body exists.
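
As a quick illustration of how that chain behaves (presence comes from ActiveSupport and turns blank values into nil, which triggers the || fallback):

nil&.body             # => nil; safe navigation avoids a NoMethodError
nil.presence          # => nil; no letter yet, so the placeholder is shown
"".presence           # => nil; a blank body also shows the placeholder
"Dear Jane".presence  # => "Dear Jane"; real content is displayed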

Finally, let’s create the service responsible for connecting to the API and generating the message:

# app/services/message_creator.rb

class MessageCreator
  def initialize(params: {})
    @client = OpenAI::Client.new
    @params = params
  end

  def call
    send_request
  end

  private

  def send_request
    response = @client.chat(parameters: default_parameters)

    response.dig('choices', 0, 'message', 'content')
  end

  def default_parameters
    {
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: 'You are a helpful assistant.'
        },
        {
          role: 'user',
          content: "Write an invitation letter for an event using the following information #{@params}.
          Make sure to use recipient likes in the letter as a form of convincing the recipient to attend the event.
          Return only the contents of the letter"
        }
      ]
    }
  end
end

That was the last piece! Now, let us give it a try, shall we?

If you do, you’ll notice a long pause before the letter appears. No, your internet is not slow (probably); that is how long it takes when we wait until we receive the full response before displaying it. But don’t fret; we can easily fix this using Turbo Streams. Let’s go ahead and make our app feel snappy!

Fun with Streams

Luckily, ruby-openai supports streaming out of the box: we just send stream as a parameter, passing a Proc that will handle the completion chunks as they’re received.

client.chat(
    parameters: {
        model: "gpt-4o",
        messages: [{ role: "user", content: "Hello!"}],
        stream: proc do |chunk, _bytesize|
            print chunk.dig("choices", 0, "delta", "content")
        end
    }
)

With that in mind, we only need to broadcast these chunks to the view, while making sure we still store the full response. We’ll handle broadcasting with a method on our Letter model, like so:

# app/models/letter.rb

class Letter < ApplicationRecord
  belongs_to :letter_creator

  # Push new content into the given DOM target over the
  # letter_creator's Turbo Stream channel
  def broadcast_body(target, content)
    broadcast_update_to(
      letter_creator,
      target:,
      content:
    )
  end
end

We’re setting the associated letter_creator as the channel for our stream, and later, we’ll use its dom_id as our target, which means we also need to update the view accordingly.

<!-- app/views/letter_creators/edit.html.erb -->

...

<%= turbo_stream_from(@letter_creator) %>
<p id="<%= dom_id(@letter_creator) %>" class="whitespace-pre-line">
  <%= @letter&.body.presence || "Here you will preview your letter" %>
</p>
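
To see the broadcast in isolation, you can try something like this hypothetical rails console session while the edit page is open in a browser:

# Any browser subscribed to this letter_creator's stream receives the update
letter = Letter.last
letter.broadcast_body(
  ActionView::RecordIdentifier.dom_id(letter.letter_creator),
  "Dear Jane, you are cordially invited..."
)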

Let’s also pass the instances of letter_creator and letter to our MessageCreator service, as they’ll be used as the target and the model instance that handles the broadcast, respectively. Modify this assignment in the update method of the LetterCreatorsController:

# app/controllers/letter_creators_controller.rb

# ...

message_creator = MessageCreator.new(params: letter_params, model: @letter, target: @letter_creator)

Lastly, we’ll need to bring all these elements together in our MessageCreator service.

# app/services/message_creator.rb

class MessageCreator
  include ActionView::RecordIdentifier

  def initialize(model:, target:, params: {})
    @client = OpenAI::Client.new
    @model = model
    @target = target
    @params = params
    @buffer = []
  end

  def call
    send_request
  end

  private

  def send_request
    @client.chat(parameters: default_parameters)

    @buffer.join('').presence
  end

  def default_parameters
    {
      model: 'gpt-4o',
      messages: [
        {
          role: 'system',
          content: 'You are a helpful assistant.'
        },
        {
          role: 'user',
          content: "Write an invitation letter for an event using the following information #{@params}.
          Make sure to use recipient likes in the letter as a form of convincing the recipient to attend the event.
          Return only the contents of the letter"
        }
      ],
      stream: handle_streaming
    }
  end

  def handle_streaming
    proc do |chunk, _bytesize|
      # Some chunks (e.g. the final one) carry no content delta, so this can
      # append nil; join turns nil entries into empty strings
      @buffer << chunk.dig('choices', 0, 'delta', 'content')

      body = @buffer.join('')

      @model.broadcast_body(dom_id(@target), body)
    end
  end
end

Here’s a quick summary of the changes:

  1. Include ActionView::RecordIdentifier module so we can use the dom_id method.
  2. Add model and target as parameters for the initializer. Also, initialize a buffer as an empty array.
  3. Add handle_streaming as a private method that will fill the buffer with each chunk and use it as the content we’ll broadcast to the view. Set this method as the value for the stream parameter in default_parameters as well.
  4. Set the stringified buffer as the return value of the service.

And that’s it! Now, whenever you generate a new letter, the text should stream into the preview as it arrives.

Conclusion

Let us quickly recap all the steps we took:

  1. Add and set up the ruby-openai gem.
  2. Define your app's basic functionality and create the necessary models, controllers, and views.
  3. Create a service that handles the API connection (MessageCreator).
  4. Improve the user experience by using Turbo Streams.

And this is only the beginning. Not only can the app's functionality grow more complex, but there are also many other aspects to building a robust AI product. These range from prompt engineering to error handling, different ways of parsing the response, and even using other models from OpenAI! Those, however, are topics that we’ll need to explore separately.

If you wish to take a deeper look at the code or run it yourself, you can check out this repo. For now, I hope this can serve as a foundation for you to reach ever higher grounds.
