
Sideproject


Supported Platforms

iOS | macOS | visionOS

Sideproject is a toolkit for developers who want to quickly prototype AI applications. It provides basic, high-performance UI components for iOS, macOS, and visionOS, enabling fast experimentation and development without the complexity of full-scale customization. By simplifying common development challenges such as file handling and UI creation, Sideproject is ideal for rapidly testing and iterating on AI concepts.

Sideproject is not meant for detailed customization or large-scale applications; instead, it serves as a temporary foundation while more tailored solutions are developed.

What Sideproject Is:

  • A toolkit for rapid prototyping of AI applications.
  • A collection of basic, high-performance UI components.
  • Provides solutions for common development challenges like file handling.
  • Offers cross-platform support for iOS, macOS, and visionOS.
  • Ideal for developers who need quick UI implementations for chat views, data lists, etc.

What Sideproject Is Not:

  • Not a general-purpose UI framework.
  • Not meant for extensive customization or large-scale applications.
  • Does not include implementations for specific services like OpenAI; these are in the AI framework.
  • Not intended for long-term use in finalized apps; serves as a placeholder for custom implementations.

Features

  • 📖 Open Source
  • 🙅‍♂️ No Account Required

Requirements

  • macOS 13.0+
  • Swift Package Manager

Installation

The Swift Package Manager is a tool for automating the distribution of Swift code and is integrated into the Swift compiler.

Once you have your Swift package set up, adding Sideproject as a dependency is as easy as adding it to the dependencies value of your Package.swift or the Package list in Xcode.

dependencies: [
    .package(url: "https://github.com/PreternaturalAI/Sideproject", branch: "main")
]
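For reference, a minimal `Package.swift` that pulls in Sideproject might look like the following sketch; the package and target names are placeholders, and the platform requirement matches the macOS 13.0+ minimum stated above:

```swift
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyAIPrototype", // placeholder package name
    platforms: [.macOS(.v13)],
    dependencies: [
        // Track the main branch, as shown above.
        .package(url: "https://github.com/PreternaturalAI/Sideproject", branch: "main")
    ],
    targets: [
        .executableTarget(
            name: "MyAIPrototype", // placeholder target name
            dependencies: ["Sideproject"]
        )
    ]
)
```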

Usage/Examples

To create a request to an LLM, construct a prompt and register the services you'd like to use (OpenAI, Claude, Gemini, etc.); based on the prompt, Sideproject will pick the service best suited to your request.
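As a sketch of that multi-service setup: the `OpenAI.Client` initializer below appears later in this README, while the `Anthropic.Client` name is an assumption modeled on it, so check the framework for the exact client types:

```swift
import Sideproject

// Register one client per provider, each with its own API key.
let openAI = OpenAI.Client(apiKey: "OPENAI_API_KEY")
let claude = Anthropic.Client(apiKey: "ANTHROPIC_API_KEY") // assumed initializer

// Sideproject routes each prompt to the most suitable registered service.
let sideproject = Sideproject(services: [openAI, claude])
```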

Import the framework

import Sideproject

Streaming

// Initializes a chat prompt with user-provided text.
let prompt = AbstractLLM.ChatPrompt(messages: [.user("PROMPT GOES HERE")])

// Creates an API client instance with your unique API key.
let openAI = OpenAI.Client(apiKey: "API KEY GOES HERE")

// Wraps the OpenAI client in a 'Sideproject' service layer for streamlined API access.
let sideproject = Sideproject(services: [openAI])

// Initiates a streaming request to the OpenAI service with the user's prompt.
let result = try await sideproject.stream(prompt)

// Iterates over incoming messages from the OpenAI service as they arrive.
for try await message in result.messagePublisher.values {
    do {
        // Attempts to convert each message's content to a String.
        let value = try String(message.content)
        // Updates a local variable with the new message content.
        self.chatPrompt = value
    } catch {
        // Prints any errors that occur during the message handling.
        print(error)
    }
}
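If you don't need token-by-token streaming, a single reply can be requested with the `complete(prompt:parameters:)` call that also appears in the vision example below; a minimal text-only sketch (the prompt text is a placeholder):

```swift
// Builds a one-shot chat prompt.
let prompt = AbstractLLM.ChatPrompt(messages: [.user("PROMPT GOES HERE")])

// Requests a single completion, capped at 200 tokens.
let completion = try await Sideproject.shared.complete(
    prompt: .chat(prompt),
    parameters: AbstractLLM.ChatCompletionParameters(
        tokenLimit: .fixed(200)
    )
)

// Extracts the plain text of the reply.
let text = try completion._chatCompletion!._stripToText()
print(text)
```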

Using GPT-4 Vision (Sending Images/Files)

// Wraps the input image in a prompt literal for the language model.
let imageLiteral = try PromptLiteral(image: image)

// Constructs a chat message combining a predefined text prompt with the image literal.
let messages: [AbstractLLM.ChatMessage] = [
    .user {
        .concatenate(separator: nil) {
            PromptLiteral(Prompts.isThisAMealPrompt)
            imageLiteral
        }
    }
]

// Asynchronously sends the constructed messages to the LLM service and awaits the response.
// It specifies the maximum number of tokens that the response can contain.
let completion = try await Sideproject.shared.complete(
    prompt: .chat(
        .init(messages: messages)
    ),
    parameters: AbstractLLM.ChatCompletionParameters(
        tokenLimit: .fixed(1000)
    )
)

// Extracts text from the completion response and attempts to convert it to a Boolean.
// This could be used, for example, to determine if the image is recognized as a meal.
let text = try completion._chatCompletion!._stripToText()
return Bool(text) ?? false

Demo

See demos on the Preternatural Cookbook - https://github.com/PreternaturalAI/Cookbook

Support

For support, open an issue on GitHub or message me on Twitter.