I keep seeing this idea floating around that “to use AI, you need to be an ML person.”

You don’t.

Most of the time, AI is just an API. If you can send a POST request, you can use an LLM.

This post is the simplest on-ramp I know: Java + HttpClient + one prompt. No frameworks. No magic. Just a clean call and a response.

The mental model: “AI is just an API”

Think of an LLM like any other service you call.

You send JSON.

You get JSON back.

Everything else is engineering: timeouts, error handling, and keeping secrets out of Git.
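Concretely, for a chat-style endpoint the round trip often looks something like this (field names here follow the common OpenAI-style shape; your provider's may differ, so treat this as an illustration, not a spec):

```
Request:
{
  "model": "YOUR_MODEL_NAME",
  "messages": [{"role": "user", "content": "Hello"}]
}

Response (trimmed):
{
  "choices": [
    {"message": {"role": "assistant", "content": "Hi! How can I help?"}}
  ]
}
```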

Step 1: Store your API key safely

Don’t put keys in code. Don’t put keys in a properties file that gets committed. Keep it boring and safe with environment variables.

macOS / Linux (bash/zsh)

export AI_API_KEY="your_key_here"

If you want it to persist, add that line to ~/.zshrc or ~/.bashrc and restart your terminal.

Windows (PowerShell)

setx AI_API_KEY "your_key_here"

Close and reopen the terminal after running that.
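Either way, you can sanity-check that the variable is visible in a fresh shell before touching any Java:

```shell
# Prints "set" if AI_API_KEY is defined and non-empty; prints an empty line if it's missing
echo "${AI_API_KEY:+set}"
```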

Step 2: A small Java program that calls an LLM

This is intentionally “first contact.” We’ll send a prompt and print whatever comes back.

import java.net.URI;
import java.net.http.*;
import java.time.Duration;

public class Main {
  public static void main(String[] args) throws Exception {
    String apiKey = System.getenv("AI_API_KEY");
    if (apiKey == null || apiKey.isBlank()) {
      System.err.println("Missing AI_API_KEY env var");
      return;
    }

    // Replace with your provider endpoint
    String url = "https://api.your-llm-provider.com/v1/chat/completions";

    String prompt = "Explain Java records like I'm a busy developer.";
    String body = """
      {
        "model": "YOUR_MODEL_NAME",
        "messages": [
          {"role":"user","content": "%s"}
        ]
      }
      """.formatted(prompt.replace("\"", "\\\"")); // naive escaping: fine for this fixed prompt, not for arbitrary input


    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(url))
        .timeout(Duration.ofSeconds(30))
        .header("Authorization", "Bearer " + apiKey)
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

    HttpResponse<String> response =
        client.send(request, HttpResponse.BodyHandlers.ofString());

    if (response.statusCode() != 200) {
      System.err.println("Request failed: " + response.statusCode());
      System.err.println(response.body());
      return;
    }

    System.out.println(response.body());
  }
}

A couple notes:

  • The endpoint URL and request JSON shape can differ a bit depending on the provider.

  • The goal here is not “perfect abstraction.” It’s “make one call and see it work.”

Step 3: Parsing JSON (keep it simple)

At this stage, printing the JSON is fine. It proves the integration works.

If you want to extract just the assistant text, use Jackson (recommended). This keeps you out of fragile string-slicing.

Add Jackson (Maven):

<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.17.2</version>
</dependency>

Then parse what you need (example structure will depend on your provider). The main point: once you parse into a small POJO/record, your Java code stays clean.
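As a sketch, assuming an OpenAI-style response shape (`choices[0].message.content` — check your provider's docs, since this layout is an assumption), the Jackson side can be a couple of small records:

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;

public class ParseDemo {
  // Only the fields we care about; everything else in the response is ignored.
  @JsonIgnoreProperties(ignoreUnknown = true)
  public record ChatResponse(List<Choice> choices) {}

  @JsonIgnoreProperties(ignoreUnknown = true)
  public record Choice(Message message) {}

  @JsonIgnoreProperties(ignoreUnknown = true)
  public record Message(String role, String content) {}

  public static void main(String[] args) throws Exception {
    // Stand-in for response.body() from the HttpClient call above
    String json = """
      {"id":"demo","choices":[{"index":0,
        "message":{"role":"assistant","content":"Records are concise data carriers."}}]}
      """;
    ChatResponse parsed = new ObjectMapper().readValue(json, ChatResponse.class);
    System.out.println(parsed.choices().get(0).message().content());
  }
}
```

Because the records ignore unknown fields, the provider can add response fields without breaking your code.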

Why this matters (and why Java fits AI work)

Calling an AI API is easy. Turning it into something reliable is the real job.

Java shines here because:

  • strong typing helps you keep request/response contracts sane,

  • concurrency is mature (and modern Java makes it even nicer),

  • it’s easy to wrap “AI calls” into a real service with timeouts, retries, and logs.
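As a taste of that last point, here is a minimal hand-rolled retry helper (an illustration, not a library recommendation; the backoff numbers are made up for this sketch):

```java
import java.util.function.Supplier;

public class Retry {
  /** Runs the supplier, retrying on RuntimeException with simple exponential backoff. */
  static <T> T withRetries(Supplier<T> call, int maxAttempts, long baseDelayMillis)
      throws InterruptedException {
    RuntimeException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return call.get();
      } catch (RuntimeException e) {
        last = e;
        if (attempt < maxAttempts) {
          // 1x, 2x, 4x... the base delay between attempts
          Thread.sleep(baseDelayMillis * (1L << (attempt - 1)));
        }
      }
    }
    throw last;
  }

  public static void main(String[] args) throws Exception {
    // Simulated flaky call: fails twice, then succeeds.
    int[] calls = {0};
    String result = withRetries(() -> {
      if (++calls[0] < 3) throw new RuntimeException("transient failure");
      return "ok";
    }, 5, 10);
    System.out.println(result + " after " + calls[0] + " attempts");
  }
}
```

In a real service you would also cap total elapsed time and only retry on retryable failures (e.g. HTTP 429 or 5xx), but the shape stays this simple.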

This first post is just the seed. Next, we can do “structured output,” tool calling, and evals. But you shouldn’t need a 12-part series just to say hello to an LLM.

What’s next on aijava.dev

I’m planning follow-ups that stay practical:

  • structured output (stop parsing free-form text)

  • retries and backoff (so your service doesn’t fall over)

  • local LLMs (Ollama) for privacy and cheap iteration
