Get LLMs in your organisation
How will your organisation get the best use of Large Language Models?

Large Language Models (LLMs) are here to stay, and they'll disrupt every type of information work over the next few years. Any organisation that wants to retain a competitive edge needs to get on the LLM train right now. Today we'll look at various ways to use LLMs through the prism of Cost and Complexity (represented in the graph as Expensive and Hard, respectively).
NO-CODE
Two no-code methods exist right out of the box, and these can generally be adopted across a company almost instantly, either free or for a few dollars per user per month in subscription fees.
Prompting: Feeding the model a seed phrase or question to elicit a desired response, steering the model's output without any additional training. This is how you interact with ChatGPT at https://chat.openai.com, for example.
Few-Shot Prompting: Help the model understand what you want by giving it a few examples of the desired task before the actual prompt, guiding it to respond in a specific way.
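As a sketch of the technique, here's how you might assemble a few-shot prompt as a chat-message list, the format most chat-completion APIs (OpenAI's included) accept. The sentiment-classification task and the example reviews are illustrative, not from any real dataset.

```python
# Build a few-shot prompt: interleave worked (input, output) example
# pairs before the real query so the model imitates the pattern.
# The sentiment task and reviews below are invented for illustration.

def few_shot_messages(examples, query, instruction):
    """Return a chat-message list: system instruction, example pairs, query."""
    messages = [{"role": "system", "content": instruction}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

examples = [
    ("The delivery was two weeks late.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]
messages = few_shot_messages(
    examples,
    query="The product works, but the manual is useless.",
    instruction="Classify the sentiment of each review as positive or negative.",
)
```

You'd then pass `messages` straight to your chat API of choice; the model sees two solved examples before the real question.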
ENRICHED
Here we're enriching existing models hosted elsewhere (typically a third-party service). We don't have exclusive control over the data we share, so anything we send may be used to train publicly available models.
Retrieval & Prompting: This involves using the model to retrieve relevant information from a provided dataset before generating a response, thus integrating external information with the prompt-response sequence. You could provide a URL to a document, for example.
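A minimal sketch of the retrieval step: score each document by word overlap with the question and stuff the best match into the prompt. Production systems use embedding similarity rather than word overlap, and the documents and question below are invented for illustration.

```python
# Retrieval-then-prompting sketch: pick the document most relevant to
# the question, then build a prompt that grounds the model in it.
import re

def tokens(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, documents):
    """Return the document sharing the most words with the question."""
    q = tokens(question)
    return max(documents, key=lambda d: len(q & tokens(d)))

documents = [
    "Refunds are processed within 14 days of receiving the returned item.",
    "Our office is open Monday to Friday, 9am to 5pm.",
    "Shipping is free on orders over 50 euros.",
]
question = "How many days until my refund is processed?"
context = retrieve(question, documents)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The assembled `prompt` then goes to the model as in ordinary prompting; the only new machinery is the retrieval step in front of it.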
Iterative Refinement: Iterative refinement involves generating an initial output, then revising and refining it in subsequent steps to improve the response quality. Whilst you can do this by hand in a GUI in the no-code context, here we're referring to driving the refinement loop programmatically.
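Driven programmatically, the loop looks something like the sketch below. `call_llm` is a hypothetical stand-in for whatever model API you use; it's stubbed here so the loop runs end to end.

```python
# Iterative refinement sketch: draft, critique, rewrite, repeat.
# call_llm is a hypothetical placeholder for a real model API call;
# the stub just echoes its prompt so the control flow is runnable.

def call_llm(prompt):
    # Replace with a real API call (e.g. a chat-completions request).
    return f"[model output for: {prompt[:40]}...]"

def refine(task, rounds=3):
    """Generate an initial draft, then critique and rewrite it `rounds` times."""
    draft = call_llm(f"Write a first draft: {task}")
    for _ in range(rounds):
        critique = call_llm(f"List weaknesses in this draft:\n{draft}")
        draft = call_llm(
            f"Rewrite the draft fixing these issues:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft

final = refine("a product announcement for our new service", rounds=2)
```

Each pass spends more tokens (cost) in exchange for quality, which is exactly the Expensive/Hard trade-off this tier sits on.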
ENTERPRISE-SPECIFIC
Now we start to leverage some complex ML power to make models deliver inferences very close to what we want. We also make the leap into self-hosting so that any work we do remains accessible only to us.
Fine-tuning Hosted: Adjusting the parameters of a pre-trained model provided by a hosting service to better fit specific tasks, leveraging the service's computational infrastructure.
Fine-tuning Open Source: Adjusting the parameters of an open-source pre-trained model to better fit a specific task, giving you more flexibility to adapt and use the model as your requirements dictate.
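Whichever route you take, fine-tuning starts from the same raw material: example conversations from your own organisation. A minimal sketch of preparing them in the JSONL chat format that common fine-tuning tooling expects (the support-bot examples are invented):

```python
# Prepare fine-tuning data: one JSON object per line, each holding a
# short example conversation. Both hosted and open-source fine-tuning
# pipelines commonly consume this JSONL chat shape.
import json

pairs = [
    ("How do I reset my password?",
     "Go to Settings > Security and choose 'Reset password'."),
    ("Can I export my data?",
     "Yes - use the Export button on the Account page."),
]

with open("train.jsonl", "w") as f:
    for question, answer in pairs:
        record = {"messages": [
            {"role": "system", "content": "You are our support assistant."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}
        f.write(json.dumps(record) + "\n")
```

Real fine-tuning runs want hundreds to thousands of such examples; curating them is usually more work than the training itself.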
Training Open-Source from Scratch: This involves building a machine learning model using open-source frameworks and tools, starting from the very beginning without any pre-training. The model is trained entirely on our own data, allowing full control over the process.
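To make "no pre-training" concrete at a toy scale: here's a character-bigram language model trained purely on a tiny corpus of our own. Real from-scratch models are transformer networks trained on billions of tokens, but the principle is the same: every statistic the model knows comes from our data and nothing else.

```python
# Toy "training from scratch": a character-bigram language model.
# All of its knowledge comes from our own corpus - no pre-training.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat. the dog sat on the log."

# "Training": count how often each character follows each other character.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(ch):
    """Greedy 'inference': the most frequent successor seen in training."""
    return counts[ch].most_common(1)[0][0]

def generate(start, n):
    """Extend `start` by n characters, one greedy step at a time."""
    out = start
    for _ in range(n):
        out += most_likely_next(out[-1])
    return out
```

Swap the bigram table for a neural network and the corpus for terabytes of text, and this counting-and-sampling loop is, in spirit, what the big labs are doing.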
$100M+ CLUB
Finally, we're creating a foundation model from scratch. This will cater completely to our fine-grained requirements, but will require at least $100M to produce, not to mention the combined efforts of some of the world's best data scientists, data engineers and MLOps specialists.
Training Custom from Scratch: This refers to developing a unique, bespoke machine learning model, starting without any pre-training. This approach caters specifically to the user's needs, allowing the highest degree of control over the model's design, implementation, and training process.