Assistant

A terminal-based chat interface with an intelligent assistant.

Overview

This script provides a simple way to chat with an AI assistant in your terminal. It uses the Ollama API to generate responses, renders markdown code blocks with syntax highlighting, and can produce shell commands on request.

Features

  • Terminal-based chat interface
  • Supports markdown code blocks
  • Syntax highlighting using Pygments
  • Handles shell commands
  • Can copy highlighted code blocks to the clipboard
  • Allows follow-up questions via the --follow-up argument
  • Customizable model and temperature via the --model and --temp arguments

Usage

Command Line Arguments

The script supports the following command line arguments:

  • --follow-up: Ask a follow-up question when piping in context
  • --copy: Copy a code block to the clipboard if one appears in the response
  • --shell: Output a shell command that does what you describe
  • --model: Specify the model (default: llama3.1:8b-instruct-q8_0)
  • --temp: Specify the sampling temperature (default: 0.2)
  • --host: Specify the host of the Ollama server
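
For example, to start a session with a different model and a higher temperature (the model tag below is illustrative; use any model your Ollama server has pulled):

python assistant.py --model mistral:7b-instruct --temp 0.7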

Piped Input

You can pipe text into the script from another command or a file. When input arrives on standard input, the script reads it and handles it as a single question; combine this with --follow-up to ask a question about the piped context.
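
For example, to pipe in a one-off question:

echo "What does a Python context manager do?" | python assistant.py

To ask about piped context instead, add --follow-up (app.py below stands in for any file; how the follow-up question itself is entered depends on the script's input handling):

cat app.py | python assistant.py --follow-up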

Non-Piped Input

Run the script from a terminal without piping anything in to enter interactive mode. Type your questions or commands, and the assistant will respond accordingly.

Installation

This script requires Python 3.x, the Ollama Python library (ollama), and Pygments for syntax highlighting. You can install these dependencies using pip:

pip install ollama pygments
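
The repository also ships a requirements.txt, so you can install the pinned dependencies instead:

pip install -r requirements.txt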

You also need an Ollama server running, either locally or on a machine reachable via --host. Please refer to the Ollama documentation for setup instructions.
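
For example, after installing Ollama you can pull the script's default model and start the server (skip ollama serve if Ollama already runs as a background service):

ollama pull llama3.1:8b-instruct-q8_0
ollama serve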

Running the Script

Run the script with Python:

python assistant.py
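
To point the script at an Ollama server on another machine, pass its address with --host (the URL below is illustrative; 11434 is Ollama's default port):

python assistant.py --host http://192.168.1.50:11434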