How Olympia AI Works

Olympia AI
2 min read · Apr 19, 2024

Olympia AI models are built on a Large Language Model (LLM), which consists of four essential parts: the Transformer Architecture, Tokens, the Context Window, and the Neural Network.

1. Transformer Architecture.

During pre-training, a significant volume of data and documents is fed to the LLM through a context window. Language processing is based on the Transformer Architecture, which includes both an encoding and a decoding part: the encoder understands the inputs, while the decoder produces the outputs.
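
To show roughly how the encoder and decoder fit together, here is a minimal sketch using PyTorch's built-in nn.Transformer. The dimensions and tensors are toy values chosen only for illustration, not anything specific to Olympia AI's models.

```python
import torch
import torch.nn as nn

# Minimal sketch: nn.Transformer bundles the encoder and decoder described
# above. All sizes below are illustrative toy values.
d_model = 32  # embedding size per token
model = nn.Transformer(
    d_model=d_model,
    nhead=4,
    num_encoder_layers=2,
    num_decoder_layers=2,
)

# Shapes follow PyTorch's default layout: (sequence_length, batch_size, d_model).
src = torch.rand(10, 1, d_model)  # encoder input: the sequence being understood
tgt = torch.rand(6, 1, d_model)   # decoder input: the sequence being produced

out = model(src, tgt)             # decoder output, one vector per target position
print(out.shape)                  # torch.Size([6, 1, 32])
```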

2. Tokens.

Inputs are processed by a tokenizer, which breaks sentences down much as a parser would. The tokenizer analyzes each sentence, taking cues such as whitespace into account for word separation. In my experience with FAST Search, we made significant efforts to maximize search accuracy using several tokenization configurations (e.g., N = 2, 3, 4, …), as in the sketch below.
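
Here is a small hypothetical sketch of the two tokenization ideas mentioned above: whitespace-based word splitting, and character n-grams (assuming the N = 2, 3, 4 configurations refer to n-gram sizes).

```python
import re

def whitespace_tokenize(sentence: str) -> list[str]:
    # Split on whitespace and strip punctuation, as a simple word tokenizer might.
    return [re.sub(r"[^\w]", "", w) for w in sentence.split() if w.strip()]

def character_ngrams(word: str, n: int) -> list[str]:
    # Break a word into overlapping character n-grams (e.g., n = 2, 3, 4).
    return [word[i:i + n] for i in range(len(word) - n + 1)]

sentence = "Olympia AI models process language"
print(whitespace_tokenize(sentence))   # ['Olympia', 'AI', 'models', 'process', 'language']
print(character_ngrams("models", 3))   # ['mod', 'ode', 'del', 'els']
```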

3. Context Window.

The context window determines how much of the supplied data the model can take in at one time; anything beyond the window is not seen.
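
To make this concrete, here is a tiny sketch of how a fixed-size context window truncates input. The window size of 4 tokens is an arbitrary illustrative value; real models use windows of thousands of tokens.

```python
def fit_to_context_window(tokens: list[str], window_size: int) -> list[str]:
    # Keep only the most recent tokens that fit in the window; everything
    # older falls outside the model's view and is effectively ignored.
    return tokens[-window_size:]

conversation = ["Hello", "how", "are", "you", "doing", "today", "?"]
print(fit_to_context_window(conversation, window_size=4))
# ['you', 'doing', 'today', '?']
```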

4. Neural Network Parameters.

Across its nodes and layers, the Neural Network analyzes correlations for every token. Connections between nodes in one layer and nodes in the next are drawn as lines, and each connection is assigned a weight, while each node in the receiving layer is assigned a bias term. The parameter count for a pair of adjacent layers is therefore the number of weights (the node count of one layer multiplied by the node count of the next) plus the number of bias terms (one per node in the next layer), and summing this across all layers gives the model's total number of parameters. The accompanying diagram describes a 41-parameter model.
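
The counting rule above can be written as a short sketch. The 6-3-5 layer sizes are a hypothetical configuration chosen only because they happen to total 41 parameters; they are not necessarily the sizes used in the diagram.

```python
def count_parameters(layer_sizes: list[int]) -> int:
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        weights = n_in * n_out   # one weight per connection between the two layers
        biases = n_out           # one bias per node in the receiving layer
        total += weights + biases
    return total

# Hypothetical layer sizes (6 -> 3 -> 5): (6*3 + 3) + (3*5 + 5) = 21 + 20 = 41.
print(count_parameters([6, 3, 5]))  # 41
```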

Training data (the vast amount of information gleaned from the internet) must be fed to the LLM through this Neural Network. The quality of the training data determines the accuracy of the LLM, and the learned weights measure how strongly one word influences another.
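
One common way this word-to-word influence is measured in Transformer models is with attention weights. The sketch below computes scaled dot-product attention scores over toy, hand-made embeddings; the vectors and words are purely illustrative.

```python
import numpy as np

def attention_weights(query: np.ndarray, keys: np.ndarray) -> np.ndarray:
    # Scaled dot-product scores: how strongly the query word "attends" to each key word.
    scores = keys @ query / np.sqrt(query.shape[0])
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    return exp / exp.sum()

# Toy 4-dimensional embeddings for three words (illustrative values only).
embeddings = {
    "cat": np.array([0.9, 0.1, 0.0, 0.2]),
    "sat": np.array([0.1, 0.8, 0.3, 0.0]),
    "mat": np.array([0.8, 0.2, 0.1, 0.1]),
}
keys = np.stack(list(embeddings.values()))
weights = attention_weights(embeddings["cat"], keys)
for word, w in zip(embeddings, weights):
    print(f"influence of '{word}' on 'cat': {w:.2f}")
```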
