Just Think AI

Glossary Term

Temperature

A dial that controls how random or focused a model's output is.

Temperature rescales the model's output logits before the softmax that produces the next-token probability distribution: values below 1 sharpen the distribution, values above 1 flatten it. Temperature 0 picks the most likely token every time — deterministic, focused, repetitive. Temperature 1 samples close to the raw distribution — varied, creative, occasionally wild. Temperature 0.7 is the default for most chat applications.
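A minimal sketch of the mechanism, assuming logits come in as a plain list (the function name and NumPy-based sampling are illustrative, not any particular model's internals):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng=None):
    """Sample a token index from logits scaled by temperature.

    temperature -> 0 approaches greedy argmax; 1.0 samples the
    model's raw distribution; higher values flatten it further.
    """
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))       # deterministic: most likely token
    scaled = np.array(logits, dtype=float) / temperature
    scaled -= scaled.max()                  # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()                    # softmax over scaled logits
    return int(rng.choice(len(probs), p=probs))
```

Dividing logits by a small temperature widens the gaps between them, so the softmax concentrates mass on the top token; dividing by a large temperature shrinks the gaps, spreading mass across more tokens.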

Practical defaults: classification, extraction, code → 0 to 0.2. Reasoning, summarization → 0.2 to 0.5. Marketing copy, brainstorming → 0.7 to 1.0. Anything above 1.2 usually produces gibberish on frontier models. Top-p (nucleus sampling) is a related dial — most teams change one or the other, not both.

Bring this to your business

Knowing the term is one thing. Shipping it is another.

We do two-week AI Sprints — one term, one workflow, into production by Day 10.