October 19, 2025

why is nothing easy

but progress never is!

Digital Diary 2000 Pro - GitHub page for this is coming soon; a few kinks need to be worked out.

Mesh View - 3D mesh/texture viewer for the web. Another one, at least.

QuickScan - In-browser QR scanner that doesn't suck.

Check 'em out!

Post Date: October 19, 2025


October 6, 2025

A New (old) look!

Windows 95 startup screen featuring the 95 logo on clouds

now featuring the 98.css library

The blog was wrapped in jdan's 98.css like the rest of the website, including the editor. Permalinks now have their own pages, and there is a nifty toolbar. Updates to GitHub coming soonish.

Post Date: October 6, 2025


July 11, 2025

Chevy C/10

Downtown Brooklyn

Post Date: July 11, 2025


July 5, 2025

Freedom Bulbs

Freedom Bulbs

Brooklyn, NY

Post Date: July 5, 2025


May 7, 2025

Basic Handwritten Number Recognition Network

Training a Convolutional Neural Network (CNN) on handwritten digits is a classic beginner project using the MNIST dataset. Here's a clear, minimal walkthrough using Python, PyTorch, and your CUDA-enabled GPU.

Download the files here but still read through the instructions:

0. Create a New Virtual Environment

bash

cd /home/user/directory/etc
python -m venv venv
source venv/bin/activate

1. Install Dependencies

Ensure Python is installed (>=3.8). Then, install PyTorch with CUDA support:

bash

pip install torch torchvision matplotlib

This little maneuver is going to cost you 5GiB++

Check if GPU is detected in Python:

python

import torch
print(torch.cuda.is_available())  

This should print 'True'

2. Load the MNIST Dataset

This is a publicly available dataset; it will be downloaded automatically the first time the loader runs, so there is nothing to fetch separately in preparation.

python

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([transforms.ToTensor()])

train_set = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_set = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=1000, shuffle=False)

3. Define a Simple CNN

python

class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, 1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, 1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(5*5*32, 128),
            nn.ReLU(),
            nn.Linear(128, 10)
        )

    def forward(self, x):
        return self.net(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CNN().to(device)

4. Train the Model

python

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

for epoch in range(5):  # 5 epochs is enough for MNIST
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        output = model(images)
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch+1} complete")

5. Evaluate Accuracy

python

correct = 0
total = 0
model.eval()
with torch.no_grad():
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f"Test Accuracy: {100 * correct / total:.2f}%")

6. Running the Entire Thing

If you are cutting and pasting, you may run into indentation and other errors. In this case, download the complete file here and run using the commands below.

  • create some images of handwritten numbers
  • move them into the mnist directory
  • the script is looking for an image named number.png

bash

cd /home/user/directory/etc
python mnist_train_and_test.py

You should see the process run through the training epochs and output a test accuracy. (The epoch count can be adjusted on line 49.)

7. Proving the Principle Again

At the end of the above test, you should see a prediction of the number in the picture along with a confidence score. The script also saves the model as mnist_test_net.pth so it can be reused to predict numbers in other pictures. Try your other examples from the previous step.

A script is supplied that will load a selected model and a selected image and return a result.

Post Date: May 7, 2025


May 3, 2025

meetings at microsoft

How meetings go at microsoft.

never a dull day at Redmond

Post Date: May 3, 2025


April 12, 2025

snakeShaker 🐍

Don't get bitten by risky Python packages!

snakeShaker is a simple web utility designed to give you a quick first glance at the potential risks associated with package imports in your Python code snippets. Paste your code, give it a shake, and see what slithers out! It primarily focuses on Python (PyPI) but has basic awareness of JavaScript (npm) and PHP (Packagist) imports too.

Live Demo Available

snakeShaker screenshot

What's This Hiss About? (Features)

  • Identifies Imports: Uses regular expressions to find import and from ... import statements in Python code, plus basic require/import in JS and use statements in PHP.
  • Registry Checks: Queries public package registries (PyPI, npm Registry, Packagist) and the wise old Libraries.io API to verify package existence.
  • Basic Metadata: Fetches information like the package author/maintainers ("handlers") and the date the package first slithered online ("First Seen").
  • Heuristic Safety Assessment: Provides a very basic, heuristic (read: educated guess!) safety likelihood based primarily on the package's age and registry status. Think of it as trying to tell a harmless garter snake from a potential viper based on first impressions โ€“ not a definitive identification!
  • Built-in Awareness: Recognizes common Python and Node.js built-in modules (no need to fear the os or fs modules!).
  • Simple Interface: Just paste, click, and view the results in a clear table.
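As a rough illustration of the regex approach, here is a simplified Python sketch (these are not snakeShaker's actual patterns, and it only handles the Python side):

```python
import re

# Simplified import extraction: matches "import x" and "from x import y".
# Real code has more cases (multi-imports, JS require, PHP use, etc.).
PY_IMPORT = re.compile(
    r'^\s*(?:import\s+([\w\.]+)|from\s+([\w\.]+)\s+import)', re.MULTILINE)

def find_python_imports(code: str) -> set[str]:
    pkgs = set()
    for m in PY_IMPORT.finditer(code):
        module = m.group(1) or m.group(2)
        # Only the top-level package name matters for a registry lookup.
        pkgs.add(module.split('.')[0])
    return pkgs

snippet = """
import os
import requests
from numpy.linalg import norm
"""
print(find_python_imports(snippet))
```

Note how `import pandas as pd` still resolves to `pandas`: the pattern stops at the first whitespace after the module name, so the alias never reaches the lookup.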

How to Use

  1. Navigate to the lair: [https://snake.timnetworks.net](https://snake.timnetworks.net)
  2. Paste your Python (or JS/PHP) code snippet into the text area.
  3. Hit the "shake it before you play with it" button.
  4. Watch it hiss (analyze) for a moment.
  5. Examine the "Snakes in the Grass" results table for insights on your imports.

Tech Stack / Ingredients

  • Frontend: Vanilla JavaScript (ES6+) for DOM manipulation and API interaction.
  • Backend: PHP for processing the code, orchestrating API calls, and basic safety heuristics.
  • Styling: CSS with a Solarized Light-inspired theme.

APIs Used

  • PyPI JSON API
  • npm Registry API
  • Packagist API
  • Libraries.io API (Requires API Key - see api.php)

Important Hisses (Limitations & Disclaimer)

๐Ÿ Anti-Venom Not Included! Please Read Carefully! ๐Ÿ

  • NOT a Security Scanner: This tool is NOT a comprehensive security analysis tool. It uses relatively simple pattern matching (regex) and basic checks (like package age). It's intended as a preliminary check or a curiosity tool, not a guarantee of security or correctness.
  • False Positives/Negatives: Complex code structures, aliased imports (import pandas as pd), dynamic imports/requires, or unconventional formatting might confuse the parser, leading to missed packages or incorrect identification.
  • Safety is Heuristic: The "Safety Likelihood" is purely based on easily obtainable data like creation date and existence on the registry. Age does NOT equal safety. Malicious packages can exist for years, and brand new packages can be perfectly safe. This assessment is just one data point and should be taken with a large grain of salt.
  • API Reliance: The accuracy and completeness of the results depend entirely on the availability and data provided by the external PyPI, npm, Packagist, and Libraries.io APIs. Downtime or rate limiting on their end will affect results.
  • Use Your Brain: Always perform your own due diligence. Review the source code of dependencies, check for known vulnerabilities (using dedicated tools like pip-audit, npm audit, etc.), understand the package's reputation, and consider the context before trusting any third-party code.

Post Date: April 12, 2025


April 7, 2025

whatJacket 🧥 v0.8.2

A simple PHP web application that suggests clothing based on the weather forecast for a given US ZIP code and a selected activity category.

Live Demo Available here

Screenshot of WhatJacket app

Features

  • Fetches hourly forecast data for the immediate future from the NOAA/NWS API.
  • Geocodes US ZIP codes to latitude/longitude using the Nominatim (OpenStreetMap) API.

Clothing Suggestion Logic

  • Temperature Bands: Defined ranges (e.g., Hot, Mild, Cold, Frigid) based on current temperature.
  • "Feels Like" Temperature: Displayed for user context.
  • Weather Conditions: Identified using keywords (rain, snow, sunny, windy, severe, etc.) from the forecast text.
  • Wind Speed: Used for flags and specific item logic (e.g., umbrella usability).
  • Precipitation Probability: Used for selecting appropriate rain/snow gear.
  • Selected Activity Category: Filters items suitable for Casual, Hiking, Professional, etc.
  • Item Properties: Considers thermal_value, water_resistance, wind_resistance, sun_protection, special_conditions defined for each clothing item.
  • Layered Outfit Suggestions: Selects a base layer, bottoms, footwear, and appropriate mid/outer layers and accessories based on conditions.
  • Prioritization Logic: Favors condition-specific gear (raincoats in rain, windbreakers in wind) and thermally appropriate items. Includes specific logic (e.g., mandatory undershirt for Professional dress shirts).
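The band-and-condition selection above can be sketched in Python (the app itself is PHP, and the bands, items, and properties below are invented for illustration):

```python
# Illustrative band/condition filtering; data is made up, not whatJacket's.
TEMP_BANDS_F = [("Frigid", -100, 20), ("Cold", 20, 45),
                ("Mild", 45, 70), ("Hot", 70, 150)]

ITEMS = [
    {"name": "Raincoat", "temp_bands": ["Cold", "Mild"], "special": ["rainy"]},
    {"name": "T-Shirt",  "temp_bands": ["Mild", "Hot"],  "special": []},
    {"name": "Parka",    "temp_bands": ["Frigid"],       "special": []},
]

def temp_band(temp_f: float) -> str:
    for name, lo, hi in TEMP_BANDS_F:
        if lo <= temp_f < hi:
            return name
    raise ValueError("temperature out of range")

def suggest(temp_f: float, conditions: list[str]) -> list[str]:
    band = temp_band(temp_f)
    picks = []
    for item in ITEMS:
        if band not in item["temp_bands"]:
            continue
        # Condition-specific gear only appears when its condition is active.
        if item["special"] and not set(item["special"]) & set(conditions):
            continue
        picks.append(item["name"])
    return picks

print(suggest(55, ["rainy"]))  # → ['Raincoat', 'T-Shirt']
```

The real logic layers this further (thermal scores, wind thresholds, activity categories), but the core is the same two filters: is the item rated for this band, and do its special conditions match the forecast.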

User Interface

  • Displays results with item images, current conditions summary, and forecast details.
  • Features a dynamic background image based on the primary weather condition.
  • Includes a Fahrenheit/Celsius toggle.
  • Vertically stacked form elements for improved usability across devices.
  • Automatically hides the main input form when results are displayed.
  • Provides a "Change Activity" form on the results page for quick updates.
  • Persistence: Remembers the last used ZIP code, activity category, and temperature unit using PHP sessions.
  • Debug Tools: Includes separate pages (clothing-debug.php, outfit-simulator.php) for inspecting clothing items and simulating outfit generation under various conditions.

Tech Stack / Ingredients

  • Backend: PHP (7.4+ recommended)
  • Frontend: HTML and CSS
  • NOAA/NWS Weather API (api.weather.gov) for forecast data.
  • Nominatim Geocoding API (nominatim.openstreetmap.org) for ZIP code lookup.

Data Sources & Terms

  • Weather Data: Provided by the [NOAA/NWS API](https://www.weather.gov/documentation/services-web-api). Requires adherence to their Terms of Service, including the use of a valid User-Agent identifying your application and contact information (see config.php). Failure to provide a valid User-Agent may result in your access being blocked.
  • Geocoding Data: Provided by [Nominatim](https://nominatim.org/) using [OpenStreetMap](https://www.openstreetmap.org/copyright) data. Requires attribution as per the [Nominatim Usage Policy](https://operations.osmfoundation.org/policies/nominatim/). This attribution is automatically included in the site footer.

This application attempts full compliance with the terms of service for both APIs.

Setup

  1. Clone or download this repository to your web server.
git clone https://github.com/timnetworks/whatJacket.git
  2. Ensure your web server (e.g., Apache, Nginx) is configured to run PHP. The script uses file_get_contents with stream contexts for API calls, which typically requires allow_url_fopen = On in your php.ini. If this is disabled for security reasons, you might need to refactor the API calls to use the php-curl extension.
  3. Place the project files (index.php, functions.php, config.php, style.css, script.js, *.php debug pages, img/ directory, etc.) in your web server's document root or a suitable subdirectory.
  4. IMPORTANT: Open config.php and update the API_USER_AGENT constant with your actual application name/version and contact information (email or website) as required by the NOAA API terms.
// Example - REPLACE WITH YOUR DETAILS:
define('API_USER_AGENT', 'MyWhatJacketFork/1.0 (myemail@example.com; https://mywebsite.com/whatjacket)');
  5. Make sure the img/ directory structure exists (img/backgrounds/, img/icons/) and contains the image files referenced in config.php. Check file permissions if images are not loading. Create a default placeholder image at img/placeholder.png.
  6. Access the index.php file via your web browser.

Configuration (`config.php`)

Most application settings are controlled within config.php:

  • API_USER_AGENT: Must be set correctly for NOAA API compliance.
  • NOAA_API_BASE_URL, GEOCODING_API_BASE_URL: API endpoints.
  • APP_VERSION, APP_TITLE, APP_NAME_SHORT, APP_URL: Basic application info.
  • LOGO_IMAGE_PATH, OG_IMAGE_PATH: Paths to branding images.
  • DEFAULT_TEMP_UNIT, DEFAULT_CATEGORY: User defaults.
  • THEME_COLOR, BACKGROUND_COLOR: PWA theme settings.
  • CATEGORIES: Defines available activity categories, their labels, and icons (text/emoji and image path).
  • TEMP_BANDS: Defines temperature ranges (in C and F) and assigns a base target_thermal_score (used implicitly by selection logic).
  • CONDITION_KEYWORDS: Keywords used to identify weather conditions from forecast text.
  • CONDITION_THRESHOLDS: Values like WINDY_THRESHOLD_MPH, RAIN_PROBABILITY_THRESHOLD, UMBRELLA_MAX_WIND_MPH.
  • FOOTER_LINKS: Links displayed in the footer.
  • FORECAST_BACKGROUNDS: Mapping of primary condition keys to background images.
  • SIMPLE_CONDITION_DISPLAY: User-friendly names for primary weather conditions.
  • TYPE_TO_DISPLAY_GROUP_MAP: Maps clothing item types to display sections (Tops, Bottoms, etc.).
  • CLOTHING_ITEMS: The core database of clothing items.

Each item defines:

  • name, type, layer, category.
  • temp_bands: Array of temperature bands where the item is suitable.
  • thermal_value: Approximate warmth score (0=none, 4=very heavy). Used for sorting/prioritization.
  • water_resistance, wind_resistance, insulation, breathability, sun_protection.
  • special_conditions: Array of condition keys (e.g., 'rainy', 'windy') required or preferred for this item.
  • img, img_fallback: Paths to image assets. Ensure these paths are correct relative to index.php.

Debug Pages

  • clothing-debug.php: Displays a grid of all items defined in CLOTHING_ITEMS, showing their properties and checking the status (OK, Warning, Error) of their primary and fallback image files.
  • outfit-simulator.php: Allows you to manually select a temperature band, activity category, and specific weather conditions (windy, rainy, sunny, etc.) to test the output of the select_clothing() function and see the generated outfit. Includes the simulated forecast data used for the test.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

License

This project is published under the [MIT License](https://opensource.org/licenses/MIT).

Acknowledgements

  • Thanks to the NOAA/NWS and OpenStreetMap/Nominatim projects for providing the essential data APIs.
  • Base code generation and refinement assisted by Google Gemini.

Post Date: April 7, 2025


April 1, 2025

simpleSubwayStatus

Simple MTA status alerts for your line of choice, with filtering and display options.

MTA Status Page Screenshot

See the project on Github!

Simple Features

  • Pulls real-time alerts from MTA Subway Alerts feed.
  • Filter by Line Group: Easily filter alerts by standard MTA line groupings (e.g., 1/2/3, A/C/E, B/D/F/M, etc.) including Shuttles and SIR.
  • Toggle Future Alerts: Choose to show or hide alerts scheduled for a future date/time.
  • Clear "No Alerts" Indication: Displays a message and graphic when no alerts match your current filter settings.
  • Expandable Details: Click any alert summary to view the full description.
  • Relative Timestamps: See how long ago an alert was issued or when a future alert is scheduled to start.
  • Mobile and Desktop Viewports: Responsive design for various screen sizes.
  • Fixed Controls: Refresh, Show All, and Future Alert toggle are always accessible at the bottom.
  • Solarized whether you like it or not.

Simple Prerequisites

  • Web server with PHP support (PHP 7.0 or higher recommended, cURL extension recommended for better reliability).
  • A modern web browser.

Installation

  1. Clone this repository or download the files to your web server's document root (or a subdirectory).
  2. Ensure the web server has read permissions for all files (index.html, style.css, mtalogo.png, no_alerts.png) and execute permissions for status.php.
  3. Add Image: Place an image named no_alerts.png in the same directory. This will be displayed when no alerts are found.
  4. (Optional) MTA API Key: If you have an MTA API key, edit status.php and add it to the cURL or file_get_contents request headers for potentially better rate limits or access.
  5. Access the index.html page through your web browser via the web server.

File Structure

.
├── index.html
├── style.css
├── status.php
├── mtalogo.png
├── no_alerts.png
└── README.md

Usage

Viewing Alerts

  • All current and future alerts are shown by default on page load.
  • Use the filter buttons at the top to show alerts only for specific line groups (e.g., clicking the 'ACE' button shows alerts affecting A, C, or E lines). Shuttles and SIR are filtered individually.
  • Click on an alert summary (the card itself) to expand and view the detailed description. Click again to collapse.
  • Alerts show when they were issued or when they are scheduled to start.

Controls (Bottom Bar)

  • Refresh: Click the circular arrow button to fetch the latest alerts immediately.
  • Show All: Click this button to clear any active line filters and display all alerts (respecting the Future Alerts toggle).
  • Show Future Toggle: Use the switch to include or exclude alerts that are scheduled to begin at a future time.

Technical Details

PHP Proxy (`status.php`)

  • Acts as a simple backend to fetch data from the official MTA alert feed (https://api-endpoint.mta.info/Dataservice/mtagtfsfeeds/camsys%2Fsubway-alerts.json).
  • This avoids potential CORS (Cross-Origin Resource Sharing) issues that would occur if the browser tried to fetch directly from mta.info.
  • Includes basic error handling and timeout settings. Recommends using the cURL extension if available.
  • Sets Cache-Control headers to discourage aggressive caching of the status data.

Data Structure & Filtering

  • The application parses the JSON response from the MTA feed. JavaScript handles filtering based on:
  • Selected line groups stored in activeFilters.
  • The state of the "Show Future Alerts" toggle (activeFilters.showFuture).
  • The active_period[0].start timestamp of each alert.
  • The "No Alerts" message is shown dynamically when the filtering results in zero visible alert cards.
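In Python terms (the site itself does this in JavaScript), the filtering described above amounts to something like this sketch, using a simplified alert shape with an invented `lines` field:

```python
import time

# Simplified alert filtering: the real feed entries are richer, but the
# two checks are the same -- future-start toggle and line-group overlap.
def visible_alerts(alerts, active_lines, show_future, now=None):
    now = now if now is not None else time.time()
    out = []
    for alert in alerts:
        start = alert["active_period"][0]["start"]
        if not show_future and start > now:
            continue  # scheduled for later and the toggle is off
        if active_lines and not set(alert["lines"]) & active_lines:
            continue  # no overlap with the selected line group
        out.append(alert)
    return out

alerts = [
    {"lines": ["A", "C"], "active_period": [{"start": 0}]},
    {"lines": ["7"],      "active_period": [{"start": 2**40}]},  # far future
]
print(len(visible_alerts(alerts, {"A", "C", "E"}, show_future=False)))  # prints 1
```

An empty `active_lines` set means "Show All", and the "No Alerts" graphic corresponds to this function returning an empty list.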

Line Grouping and Colors

  • Lines are grouped and ordered based on common MTA conventions:
  • 1, 2, 3 (Red)
  • 4, 5, 6 (Green)
  • 7 (Purple)
  • A, C, E (Blue)
  • G (Lime Green)
  • B, D, F, M (Orange)
  • N, Q, R, W (Yellow)
  • J, Z (Brown)
  • L (Gray)
  • S (42nd St Shuttle - Gray)
  • H / sR (Rockaway Shuttle - Gray)
  • FS / sF (Franklin Ave Shuttle - Gray)
  • SI (Staten Island Railroad - Blue)
  • Colors used are based on official MTA branding. Filtering applies to all lines within a clicked group (except for individual shuttles/SIR).

See the project on Github!

Post Date: April 1, 2025


February 12, 2025

sLLeper - Self-Hosted LLM Server

Part Two - The Install

In this guide, we will put together a small LLM server for basic text, image, and audio inference. The guide is broken into sections for ComfyUI, Ollama, and Open WebUI, as well as some by-the-way instructions to tie it all together.

Skip to: operating system install, comfyui, ollama, openwebui, customizations, updates

Install Operating System and Software

  • Fresh install of Linux Mint 22.1 MATE
  • Installed to a 256GiB SATA SSD M.2 drive, automatic formatting with LVM partitioning
  • I chose to install the optional media codecs although these should not be required
  • The computer got a DHCP IP address; we'll leave that for now
sudo apt update -y

sudo apt install openssh-server tmux htop nvtop
  • Now let's see if ssh is ready for us to log in from a remote computer.
sudo systemctl status ssh
  • The output should look something like this:
โ— ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/usr/lib/systemd/system/ssh.service; disabled; preset: enabled)
     Active: active (running) since Wed 2025-02-12 17:42:26 EST; 5min ago
TriggeredBy: โ— ssh.socket
       Docs: man:sshd(8)
             man:sshd_config(5)
    Process: 3075 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
   Main PID: 3077 (sshd)
      Tasks: 1 (limit: 18994)
     Memory: 2.7M (peak: 19.1M)
        CPU: 106ms
     CGroup: /system.slice/ssh.service
             โ””โ”€3077 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"
  • Now we can log into the system from our main computer, and unplug the screen and peripherals
sudo apt update -y

sudo apt upgrade -y
  • This should give us an updated system, ready to install our software.
  • It may take a little while; this is a good break point for a coffee
  • Once done, it's not a bad idea to send a reboot
  • Now we install some dependencies
  • First up is Python 3 (3.12 preferred, default as of this writing)
sudo apt install python-is-python3 python3-dev python3-venv -y
  • Next, we install git so we can clone the software install directories
sudo apt install git -y
  • Lastly, we install nvidia-firmware-5xx-server (in my case, 550.90.07) and associated utils
sudo apt install nvidia-firmware-550-server-550.90.07 nvidia-utils-550-server -y
  • Then activate them in Driver Manager because I can't figure out the cli command for that -- it takes a while
  • Restart again when prompted
  • Once you have restarted and logged back in, run nvidia-smi to check that the video card is being recognized
nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06              Driver Version: 555.42.06      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080        Off |   00000000:01:00.0  On |                  N/A |
|  0%   33C    P5             12W /  200W |      73MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      1004      G   /usr/lib/xorg/Xorg                             69MiB |
+-----------------------------------------------------------------------------------------+
  • We should have enough to get after the meat and potatoes now.

----

ComfyUI - Node based multi-modal workdesk

The most powerful and modular diffusion model GUI and backend.

https://github.com/comfyanonymous/ComfyUI

  • Navigate to your home folder.
cd ~/
  • Next, clone the ComfyUI directory into the home folder
git clone https://github.com/comfyanonymous/ComfyUI
Cloning into 'ComfyUI'...
remote: Enumerating objects: 17175, done.
remote: Counting objects: 100% (67/67), done.
remote: Compressing objects: 100% (47/47), done.
remote: Total 17175 (delta 41), reused 20 (delta 20), pack-reused 17108 (from 3)
Receiving objects: 100% (17175/17175), 57.65 MiB | 7.76 MiB/s, done.
Resolving deltas: 100% (11463/11463), done.
cd ComfyUI
  • Create a Virtual ENVironment directory to work out of
python -m venv venv
  • Initialize the virtual environment
source venv/bin/activate
  • The shell should now show (venv) before your username@host
  • Install the pip requirements
pip install -r requirements.txt
  • This will download the needed libraries but keep them confined to this venv so as not to mess with other Python app dependencies and versions of said dependencies
  • Next up is the NVIDIA CUDA Toolkit. The following command will install version 12.8 for compatibility through the RTX-50x0 series of graphics adapters.
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
  • Once the download is ready, we can test out the runtime. In the ~/ComfyUI directory:
python main.py
  • If it's running, we're done with ComfyUI. Proceed to the next section. If it does not run, we may need to uninstall torch libs and install again
pip uninstall torch torchaudio torchvision

pip install -r requirements.txt
  • Then try running ComfyUI again
python main.py
  • It should load up and show the IP address to access it by. We can close out CTRL+C and continue from here.
  • Create a comfy.sh file in /usr/local/bin/comfy.sh and make it executable
sudo touch /usr/local/bin/comfy.sh

sudo chmod +x /usr/local/bin/comfy.sh

sudo nano /usr/local/bin/comfy.sh
  • Paste the following:
#!/bin/bash
cd ~/ComfyUI
source venv/bin/activate
python main.py --listen 0.0.0.0 --preview-method auto
  • Press CTRL+O, ENTER, & CTRL+X to save and exit.
  • Now let's make the app into a system service
  • Create a comfy.service file
sudo touch /lib/systemd/system/comfy.service

sudo nano /lib/systemd/system/comfy.service
  • Paste the following config for the ComfyUI service. It will automatically start with the computer and restart if it crashes.

[Unit]
Description=ComfyUI Daemon
After=network.target

[Service]
User=user
Group=user

ExecStart=/usr/local/bin/comfy.sh

Restart=always
RestartSec=10

StandardOutput=journal
StandardError=journal

KillSignal=SIGTERM

[Install]
WantedBy=default.target
  • Press CTRL+O, ENTER, & CTRL+X to save and exit.
  • Make sure to replace the User=user and Group=user with the user you set up.
  • Reload daemon and start the service
sudo systemctl daemon-reload

sudo systemctl start comfy.service

sudo systemctl status comfy.service
  • It should display the service running like so:
โ— comfy.service - ComfyUI Daemon
     Loaded: loaded (/usr/lib/systemd/system/comfy.service; disabled; preset: enabled)
     Active: active (running) since Wed 2025-02-12 20:37:45 EST; 2s ago
   Main PID: 2500 (comfy.sh)
      Tasks: 8 (limit: 18992)
     Memory: 294.5M (peak: 294.5M)
        CPU: 2.542s
     CGroup: /system.slice/comfy.service
             โ”œโ”€2500 /bin/bash /usr/local/bin/comfy.sh
             โ””โ”€2501 python main.py --listen 0.0.0.0 --preview-method auto

Feb 12 20:37:45 pavilion systemd[1]: Started comfy.service - ComfyUI Daemon.
Feb 12 20:37:47 pavilion comfy.sh[2501]: Checkpoint files will always be loaded safely.
Feb 12 20:37:47 pavilion comfy.sh[2501]: Total VRAM 8104 MB, total RAM 15905 MB
Feb 12 20:37:47 pavilion comfy.sh[2501]: pytorch version: 2.6.0+cu124
Feb 12 20:37:47 pavilion comfy.sh[2501]: Set vram state to: NORMAL_VRAM
Feb 12 20:37:47 pavilion comfy.sh[2501]: Device: cuda:0 NVIDIA GeForce GTX 1080 : cudaMallocAsync
  • ComfyUI should now be accessible via web browser at http://ip.address.of.computer:8188

----

Ollama - local LLM inferencing

  • This is a single command that pulls a .sh file and runs it on your computer
  • Warning This is not advised except with reputable sources
curl -fsSL https://ollama.com/install.sh | sh
  • Manual installation instructions available here: https://github.com/ollama/ollama/blob/main/docs/linux.md
  • Installation will proceed and initialize ollama.service, which starts automatically and restarts on crash.
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.
  • Check if it's active
ollama ps
  • There won't be any models but the output will contain some columns.
NAME ID SIZE PROCESSOR UNTIL
  • systemctl status ollama.service should also display the status for us.
โ— ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
     Active: active (running) since Wed 2025-02-12 21:10:56 EST; 1min 47s ago
   Main PID: 3229 (ollama)
      Tasks: 10 (limit: 18992)
     Memory: 14.8M (peak: 24.6M)
        CPU: 147ms
     CGroup: /system.slice/ollama.service
             โ””โ”€3229 /usr/local/bin/ollama serve

Feb 12 21:10:56 pavilion ollama[3229]: [GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Serv>
Feb 12 21:10:56 pavilion ollama[3229]: [GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Serv>
Feb 12 21:10:56 pavilion ollama[3229]: [GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Serv>
Feb 12 21:10:56 pavilion ollama[3229]: [GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Serv>
Feb 12 21:10:56 pavilion ollama[3229]: time=2025-02-12T21:10:56.395-05:00 level=INFO source=routes.go:1238 msg="Listening on 1>
Feb 12 21:10:56 pavilion ollama[3229]: time=2025-02-12T21:10:56.395-05:00 level=INFO source=routes.go:1267 msg="Dynamic LLM li>
Feb 12 21:10:56 pavilion ollama[3229]: time=2025-02-12T21:10:56.396-05:00 level=INFO source=gpu.go:226 msg="looking for compat>
Feb 12 21:10:56 pavilion ollama[3229]: time=2025-02-12T21:10:56.533-05:00 level=INFO source=types.go:131 msg="inference comput>
Feb 12 21:11:31 pavilion ollama[3229]: [GIN] 2025/02/12 - 21:11:31 | 200 |      53.359µs |       127.0.0.1 | HEAD     "/"
Feb 12 21:11:31 pavilion ollama[3229]: [GIN] 2025/02/12 - 21:11:31 | 200 |     126.764µs |       127.0.0.1 | GET      "/api/ps"
  • We should be ready to proceed to installation of Open WebUI now.
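Once a model has been pulled (e.g. with `ollama pull`), the server can also be exercised programmatically over its HTTP API on port 11434. A minimal Python sketch; the model name `llama3.2` is just an example and must already be pulled:

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False returns one JSON object instead of a chunked stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Non-streaming call to Ollama's /api/generate endpoint.
    req = urllib.request.Request(
        f"{OLLAMA}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(build_payload("llama3.2", "Say hello in five words."))
    # Live call (needs the server running and the model pulled first):
    # print(generate("llama3.2", "Say hello in five words."))
```

This same endpoint is what Open WebUI talks to behind the scenes, so it doubles as a sanity check that the Ollama service is reachable.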

----

Open WebUI - browser based LLM Chat Interface (like ChatGPT)

Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with built-in inference engine for RAG, making it a powerful AI deployment solution.

https://docs.openwebui.com

  • First, create a folder into which we will install it
cd ~/

mkdir .open-webui
  • Then create a venv inside it, but first deactivate the one from earlier
deactivate
  • You should no longer see (venv) before the username@host prompt
  • Change into the new folder and create the venv there (the launch script we write later expects it at ~/.open-webui/venv)
cd ~/.open-webui

python -m venv venv
  • Then activate it
source venv/bin/activate
  • In the new venv, install open-webui using pip
pip install open-webui
  • This will take a couple minutes. Good time for another coffee.
  • Once finished, launch it to let it download more stuff
open-webui serve
  • Once all the files are loaded, it should also be accessible in the browser at http://ip.address.of.computer:8080
  • On first login, it will prompt you to create an admin account.
  • Now, let's register it as a service
sudo touch /lib/systemd/system/openwebui.service

sudo nano /lib/systemd/system/openwebui.service
  • Paste the following config in there (change User and Group from user to your actual username)
[Unit]
Description=Open WebUI Daemon
After=network.target

[Service]
User=user
Group=user

ExecStart=/usr/local/bin/openweb.sh

Restart=always
RestartSec=10

StandardOutput=journal
StandardError=journal

KillSignal=SIGTERM

[Install]
WantedBy=default.target
  • Create the launcher script, make it executable (note the sudo, since the file is root-owned), and open it for editing
sudo touch /usr/local/bin/openweb.sh

sudo chmod +x /usr/local/bin/openweb.sh

sudo nano /usr/local/bin/openweb.sh
  • Paste the following
#!/bin/bash
cd ~/.open-webui
source venv/bin/activate
open-webui serve
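If you prefer to review both files before touching the live system paths, the same contents can be staged with heredocs first. This is just a sketch: the $STAGE directory is an arbitrary example, and you would still sudo-copy the results into /lib/systemd/system and /usr/local/bin afterwards.

```shell
#!/bin/bash
# Stage the unit file and launcher in a scratch directory for review.
# STAGE is an example path; copy the files into place with sudo once checked.
STAGE="${STAGE:-/tmp/openwebui-staging}"
mkdir -p "$STAGE"

# Quoted EOF keeps everything literal (no variable expansion)
cat > "$STAGE/openwebui.service" <<'EOF'
[Unit]
Description=Open WebUI Daemon
After=network.target

[Service]
User=user
Group=user
ExecStart=/usr/local/bin/openweb.sh
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
KillSignal=SIGTERM

[Install]
WantedBy=default.target
EOF

cat > "$STAGE/openweb.sh" <<'EOF'
#!/bin/bash
cd ~/.open-webui
source venv/bin/activate
open-webui serve
EOF
chmod +x "$STAGE/openweb.sh"
echo "staged in $STAGE"
```

Remember to change User and Group to your own username before copying the unit file over.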
  • Reload the systemd daemon, then enable and start Open WebUI
sudo systemctl daemon-reload

sudo systemctl enable openwebui

sudo systemctl start openwebui
  • It should now be running like the others, and restart as needed
  • Open the app in a new tab with port 8080
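If the tab comes up empty right after a restart, the service may still be starting. A small helper can poll the port until something answers; this sketch relies on bash's built-in /dev/tcp redirection (available in Ubuntu's bash), and the host, port, and retry count are just examples.

```shell
#!/bin/bash
# Poll a TCP port until something is listening, or give up after N tries.
# Uses bash's /dev/tcp pseudo-device; arguments here are examples.
wait_for_port() {
  local host="$1" port="$2" tries="${3:-30}"
  local i
  for ((i = 0; i < tries; i++)); do
    # The redirection succeeds only if the connection is accepted;
    # the subshell closes fd 3 again on exit
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example: wait up to 30s for Open WebUI before opening the browser
# wait_for_port 127.0.0.1 8080 && echo "Open WebUI is answering"
```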

----

Inviting the Models

What kind of party is it without any models? First, let's download an extension to ComfyUI to allow us to conveniently pull down models into the correct directories.

ComfyUI

  • Navigate to ~/ComfyUI/custom_nodes and run the following commands to clone the ComfyUI Manager and restart the service.
cd ~/ComfyUI/custom_nodes

git clone https://github.com/ltdrdata/ComfyUI-Manager comfyui-manager

sudo systemctl restart comfy.service
  • Once restarted, refresh your browser using F5 or CTRL+R, and you should see a blue Manager button
  • From there, check the Custom Nodes Manager and Model Manager for the desired weights.
  • You may also run software updates for ComfyUI (which is developed at a blistering pace, and can have multiple updates per week) as well as any custom nodes downloaded through workflows.
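Since the clone step above fails if the directory already exists, it can be wrapped in a small idempotent helper that clones on first run and fast-forwards afterwards. A sketch under this guide's paths; NODES_DIR and REPO are overridable assumptions.

```shell
#!/bin/bash
# Install or update ComfyUI-Manager in one call (sketch).
# NODES_DIR and REPO default to this guide's values.
update_manager() {
  local nodes_dir="${NODES_DIR:-$HOME/ComfyUI/custom_nodes}"
  local repo="${REPO:-https://github.com/ltdrdata/ComfyUI-Manager}"
  local dest="$nodes_dir/comfyui-manager"
  mkdir -p "$nodes_dir"
  if [ -d "$dest/.git" ]; then
    # Already cloned: just fast-forward to the latest commit
    git -C "$dest" pull --ff-only
  else
    git clone "$repo" "$dest"
  fi
}

# Example: update_manager && sudo systemctl restart comfy.service
```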

Open WebUI

  • Open the admin page by clicking on the user icon at the top right and choosing Admin Panel
  • Choose Settings from the tabs at the top
  • Choose Models from the resulting menu
  • Press the Manage Models icon at the top right
  • Enter a model name such as starcoder:1b into the field under Pull a model from Ollama.com
  • You should see a progress bar as the model downloads
  • It should be ready in a New Chat or to edit in Workspace
  • Now go to Images in the Admin Panel menu
  • For ComfyUI Base URL enter http://127.0.0.1:8188 to point it at the localhost (or enter the LAN IP it got via DHCP)
  • Each chat reply will now have a button to generate an image based on the chat response. However, the ComfyUI workflow has to be set up for this.

----

Keeping software up to date

  • System software update
sudo apt update -y && sudo apt upgrade -y
  • ComfyUI update: best done from the ComfyUI-Manager addon
  • Ollama update: run the script again
curl -fsSL https://ollama.com/install.sh | sh
  • Open WebUI update (once inside the appropriate venv)
pip install --upgrade open-webui
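The update steps above (minus ComfyUI, which is easiest through its Manager button) can be collected into one small script. This is a sketch under this guide's paths (~/.open-webui); as a safety measure it only prints each command unless you set APPLY=1.

```shell
#!/bin/bash
# Combined updater sketch for the stack in this guide. Defaults to a dry
# run that prints each command; set APPLY=1 to actually execute them.
set -e

run() {
  if [ "${APPLY:-0}" = "1" ]; then
    "$@"
  else
    echo "would run: $*"
  fi
}

# System packages
run sudo apt update -y
run sudo apt upgrade -y

# Ollama reinstalls itself in place via its install script
run sh -c 'curl -fsSL https://ollama.com/install.sh | sh'

# Open WebUI lives in its own venv under ~/.open-webui
run bash -c 'source ~/.open-webui/venv/bin/activate && pip install --upgrade open-webui'
```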

----

A word of caution

This guide details a non-production environment, presumably on a home network or disconnected from the internet altogether. Details for HTTPS and/or multi-user access can be found on each software package's installation pages.

Post Date: February 12, 2025

Permalink

Ready