01 // ABOUT
Seth AI is a family of large language models trained for unfiltered reasoning, code synthesis, and scientific research. We apply no safety filters or content restrictions, and accept no alignment tax that degrades capability. The result is a model that thinks more clearly, writes more precisely, and solves problems that filtered systems refuse to touch.
Built on a sparse mixture-of-experts architecture with 68 billion total parameters and 37 billion active per forward pass. Released under a permissive open-weight license. You own the weights. You set the boundaries.
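Because the weights are open, standard tooling applies. Below is a minimal loading sketch using Hugging Face transformers; the repo id seth-ai/Seth-68B is assumed for illustration, so substitute the published name.

```python
# Minimal sketch, assuming the weights ship in Hugging Face format.
# "seth-ai/Seth-68B" is a hypothetical repo id, not a confirmed one.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("seth-ai/Seth-68B")
model = AutoModelForCausalLM.from_pretrained(
    "seth-ai/Seth-68B",
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard the weights across available GPUs
)

prompt = "Walk through the failure modes of Raft leader election."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```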
02 // WHAT IT DOES
01
No content filters. No refusal training. Seth AI reasons through any problem you present — scientific, technical, creative, or controversial — without manufactured hesitation.
02
Trained on production-grade codebases across 40+ languages. Generates, debugs, and optimizes software from system architecture to assembly.
03
Literature synthesis, hypothesis generation, statistical analysis, and experimental design. A research assistant that actually understands methodology.
04
128,000-token context window with full attention. Analyze entire codebases, research papers, or datasets in a single pass without degradation; a sketch follows these cards.
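To make the single-pass claim concrete, here is a sketch that packs a whole project into one prompt and checks that it fits the window. The repo id and the my_project directory are illustrative assumptions.

```python
# Sketch: fit an entire codebase into the 128k window in one pass.
# Assumes the hypothetical "seth-ai/Seth-68B" tokenizer from the loading example.
from pathlib import Path
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("seth-ai/Seth-68B")
source = "\n\n".join(p.read_text() for p in sorted(Path("my_project").rglob("*.py")))
n_tokens = len(tok(source)["input_ids"])
assert n_tokens <= 128_000, f"{n_tokens:,} tokens exceeds the context window"

prompt = source + "\n\nList every call site where this codebase can deadlock."
# Pass `prompt` to model.generate() exactly as in the loading sketch above.
```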
03 // MODELS
68B PARAMS | MoE | 37B ACTIVE
The flagship model. Sparse mixture-of-experts with 256 routed experts. Best-in-class reasoning, code generation, and scientific problem solving.
37B PARAMS | DENSE
Fully dense model for deployments requiring maximum throughput. No routing overhead: every parameter contributes to every token. See the per-token cost sketch after these model cards.
14B PARAMS | DENSE | EDGE-OPTIMIZED
Compact and fast. Designed for on-device inference and API deployments where latency matters more than parameter count.
CODE-SPECIALIZED | 34B PARAMS
Fine-tuned exclusively on software engineering corpora. Surpasses generalist models on SWE-bench, HumanEval, and private production codebases.
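Active parameters, not total parameters, drive per-token inference cost. The back-of-envelope sketch below uses the common approximation of roughly 2 FLOPs per active parameter per generated token; the factor of 2 and the non-flagship model names are assumptions, not published figures.

```python
# Rule of thumb: one forward pass costs ~2 FLOPs per *active* parameter
# per token. Counts below are the advertised parameter figures; names
# other than Seth-68B are assumed labels for illustration.
ACTIVE_PARAMS = {
    "Seth-68B (MoE, 37B active)": 37e9,
    "Seth-37B (dense)":           37e9,
    "Seth-14B (dense)":           14e9,
    "Seth-Code-34B":              34e9,
}
for name, n in ACTIVE_PARAMS.items():
    print(f"{name}: ~{2 * n / 1e9:.0f} GFLOPs per token")
```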
04 // RESEARCH
TECHNICAL REPORT
We present the Seth MoE architecture: 68B total parameters with expert-choice routing, load-balanced auxiliary losses, and a novel attention mechanism that scales linearly with context length. The result is a model that outperforms dense models ten times its active parameter count on reasoning benchmarks while maintaining inference efficiency. A toy sketch of expert-choice routing follows the paper link below.
READ PAPER →
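For readers unfamiliar with expert-choice routing, the toy PyTorch layer below shows the core idea: experts select their highest-affinity tokens rather than tokens selecting experts. The dimensions, the capacity formula, and the omission of auxiliary losses are illustrative simplifications, not the Seth implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpertChoiceMoE(nn.Module):
    """Toy expert-choice layer: experts pick tokens, not the reverse.
    A sketch of the published idea (Zhou et al., 2022), not Seth's code."""

    def __init__(self, d_model, d_ff, n_experts, capacity_factor=2.0):
        super().__init__()
        self.n_experts = n_experts
        self.capacity_factor = capacity_factor
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.w_in = nn.Parameter(torch.randn(n_experts, d_model, d_ff) * d_model ** -0.5)
        self.w_out = nn.Parameter(torch.randn(n_experts, d_ff, d_model) * d_ff ** -0.5)

    def forward(self, x):                                   # x: (n_tokens, d_model)
        n_tokens, d_model = x.shape
        # Each expert takes its top-k tokens; k is chosen so the total
        # slots cover capacity_factor times the token count.
        k = max(1, int(self.capacity_factor * n_tokens / self.n_experts))
        scores = F.softmax(self.gate(x), dim=-1)            # (n_tokens, n_experts)
        weight, idx = torch.topk(scores.t(), k, dim=-1)     # (n_experts, k)
        picked = x[idx]                                     # (n_experts, k, d_model)
        hidden = F.gelu(torch.einsum("ekd,edf->ekf", picked, self.w_in))
        expert_out = torch.einsum("ekf,efd->ekd", hidden, self.w_out)
        # Scatter weighted expert outputs back to their token positions.
        out = torch.zeros_like(x)
        out.index_add_(0, idx.reshape(-1),
                       (weight.unsqueeze(-1) * expert_out).reshape(-1, d_model))
        return out

layer = ExpertChoiceMoE(d_model=64, d_ff=256, n_experts=8)
y = layer(torch.randn(512, 64))   # 512 token vectors in, 512 out
```

Because every expert fills exactly k slots, load balance falls out of the routing rule itself, which is why this family of routers pairs well with the auxiliary losses the report mentions.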
BENCHMARK RESULTS
Seth-68B achieves new highs on MATH, GSM8K, HumanEval, and SWE-bench Verified. Crucially, it does so without alignment degradation — performance holds across unfiltered evaluations that cause other models to refuse or fail.
| BENCHMARK | SCORE |
|---|---|
| MATH | 92.4% |
| GSM8K | 96.1% |
| HUMANEVAL | 94.5% |
| SWE-BENCH VERIFIED | 48.7% |
FULL RESULTS →
05 // EARLY ACCESS
Seth AI is rolling out in phases. Join the waitlist for API access, model weights, and research previews. No spam. No marketing. Just access.
By joining, you agree to our Terms of Use and Privacy Policy. We will never sell your data.