Ray's Blog
Configure Ollama

Introduction

When running Ollama on a server, you may need to configure various aspects of the deployment, from changing model storage locations to adjusting performance parameters. This guide covers common server-side administration tasks:
  • Understanding Ollama environment variables and configuration options
  • Changing the model storage location (useful for disk space management)
  • Configuring Ollama via systemd service files
  • Migrating existing models to new storage locations
For information on connecting to a remote Ollama instance from your local machine, see Connecting to Remote Ollama Servers with SSH Tunneling.
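As an illustration of the storage-migration step, here is a minimal Python sketch that copies an existing model store to a new disk and prints the systemd override pointing Ollama at it. Both paths are assumptions for this example; OLLAMA_MODELS is the environment variable Ollama reads for its model directory.

    # Sketch: migrate the Ollama model store and emit a matching systemd override.
    # Both paths are assumptions; adjust them for your server.
    import shutil
    from pathlib import Path

    old_dir = Path("/usr/share/ollama/.ollama/models")  # common default under systemd
    new_dir = Path("/data/ollama/models")               # hypothetical larger disk

    new_dir.mkdir(parents=True, exist_ok=True)
    if old_dir.exists():
        shutil.copytree(old_dir, new_dir, dirs_exist_ok=True)  # copy models across

    # A systemd drop-in (e.g. created with `systemctl edit ollama`) would contain:
    print(f'[Service]\nEnvironment="OLLAMA_MODELS={new_dir}"')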

Monday, November 3, 2025 | 2 minutes Read

AskSage Python API Setup

AskSage is a secure and extensible generative AI platform designed for government and commercial organizations, with a particular focus on the public sector and regulated industries. It provides a way for teams to leverage various large language models (LLMs) and other AI capabilities in a secure and compliant environment. Key features of AskSage include:
  • Multi-model access: support for various LLMs, including GPT, Claude, Gemini, and specialized government-approved models
  • Enterprise security: SOC 2 compliance, data encryption, and air-gapped deployment options
  • Audit trails: complete logging and monitoring for regulatory compliance
  • Custom integrations: API access for embedding AI capabilities into existing workflows
  • Content filtering: built-in safety measures and content moderation
A comprehensive example is provided by AskSage here.
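As a minimal, unofficial sketch of calling AskSage from Python with the requests library: the endpoint path, header name, payload fields, and model name below are assumptions for illustration; check the AskSage API reference for the exact contract.

    # Hypothetical AskSage query over REST; field and header names are assumptions.
    import os
    import requests

    API_BASE = "https://api.asksage.ai/server"                    # assumed base URL
    headers = {"x-access-tokens": os.environ["ASKSAGE_API_KEY"]}  # illustrative env var

    payload = {
        "message": "Summarize SOC 2 compliance in one paragraph.",
        "model": "gpt-4o",  # placeholder model identifier
    }
    resp = requests.post(f"{API_BASE}/query", headers=headers, json=payload, timeout=60)
    resp.raise_for_status()
    print(resp.json())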

Saturday, September 6, 2025 | 5 minutes Read

Connecting to Remote Ollama Servers with SSH Tunneling

Introduction

Ollama is a tool for running large language models (LLMs) locally. When you have Ollama running on a remote server (e.g., a GPU-enabled workstation or HPC cluster), you can access it securely from your local machine using SSH tunneling. This guide demonstrates how to:
  • Create an SSH tunnel to a remote Ollama instance
  • Test the connection and query available models
  • Use Ollama with the OpenAI-compatible API for seamless integration with existing code

Getting Started

Setting up the SSH Tunnel: First, configure your environment variables. We’ll map the remote Ollama service (default port 11434) to local port 11435 to avoid conflicts with any local Ollama instance.
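Once the tunnel is up (for example, ssh -N -L 11435:localhost:11434 user@gpu-server, where the host name is a placeholder), the remote instance answers on localhost:11435. A short sketch using the openai Python client against Ollama's OpenAI-compatible /v1 endpoint:

    # Query the tunneled Ollama server through its OpenAI-compatible endpoint.
    # The model name is a placeholder for whatever is pulled on the remote host.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11435/v1",  # local end of the SSH tunnel
        api_key="ollama",  # Ollama ignores the key, but the client requires one
    )

    reply = client.chat.completions.create(
        model="llama3.1",
        messages=[{"role": "user", "content": "Hello from the local end of the tunnel!"}],
    )
    print(reply.choices[0].message.content)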

Sunday, June 15, 2025 | 3 minutes Read

PyTorch Distributed Data Parallel With Model Parallel in an HPC Environment

Objective

This tutorial covers:
  • how to split a model and place it on multiple GPUs
  • how to train such a model in a distributed data parallel fashion
  • how to use torch.distributed.launch and create a Slurm job script for an HPC environment

Model Parallel (Pipelining)

When a model is too large to fit on one GPU device, we can cut it in half and put each part on a different GPU device. To do this, we partition the model into a “head” and a “tail” and specify which device to put each on. In the following toy example, we simply put the first part on the current GPU device and the second part on the next device.
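A minimal sketch of that head/tail split, with arbitrary layer sizes and device ids: the head lives on the first GPU, the tail on the second, and forward() moves activations between them.

    # Toy model-parallel split across two GPUs; layer sizes are illustrative.
    import torch
    import torch.nn as nn

    class TwoGPUModel(nn.Module):
        def __init__(self, dev0="cuda:0", dev1="cuda:1"):
            super().__init__()
            self.dev0, self.dev1 = dev0, dev1
            self.head = nn.Sequential(nn.Linear(1024, 512), nn.ReLU()).to(dev0)  # first half
            self.tail = nn.Linear(512, 10).to(dev1)                              # second half

        def forward(self, x):
            x = self.head(x.to(self.dev0))
            return self.tail(x.to(self.dev1))  # hop activations to the second GPU

    model = TwoGPUModel()
    out = model(torch.randn(8, 1024))  # output lives on cuda:1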

Thursday, December 12, 2019 | 5 minutes Read

Guides On Choosing Deep Learning Server

Introduction

Choosing the right GPU server for deep learning is often the first problem facing research teams in both industry and academia. This article introduces a few tips for picking the right hardware for your team. If the purpose of the server is mainly development, an RTX server would be the most cost-effective. If it is for production, namely to churn through terabytes to petabytes of data, it is better to use high-end scalable servers, so that one can train a single model in parallel efficiently.

Monday, June 10, 2019 | 3 minutes Read

Learning PyTorch Part I

Introduction

Currently, I am participating in the deep learning part1v2 course as an “international fellow”. The course is taught by Jeremy Howard of fast.ai. It is not available to the public yet, but it will be in the future. During the course, Jeremy introduced PyTorch and the fastai package built on top of PyTorch. Before this, I had only used TensorFlow and Keras. PyTorch is quite different (in a good way); I am very impressed by its elegant and flexible design, and I would like to introduce some features I find interesting.

Monday, November 13, 2017 | 3 minutes Read
Contact me:
  • yren@bnl.gov
  • yhren
  • Yihui (Ray) Ren

Liability Notice: This blog is for informational and educational purposes only. The content provided here represents personal opinions and is not intended as professional advice. Readers should not rely solely on this information and are responsible for their own actions and decisions. This blog is not liable for any damages or consequences resulting from the use of its content. The views expressed here are my own and do not reflect those of my employer or any funding agencies. © 2017-2025 Yihui Ren. All rights reserved.

