
Large Language Model Evaluations - What and Why
This talk will explore the intricacies of assessing the quality of LLMs and the best practices that ensure their reliability and accuracy.
Join the Data Phoenix webinar in which Nils Reimers (Director of Machine Learning at Cohere) will talk about why multilingual semantic search is amazing, how respective models are trained, and the new use cases this unlocks.
New AI events calendar, webinars (multilingual semantic search, the rise of synthetic data in regulated industries, LlamaIndex: how to use LLMs to interface with multiple data sources), accelerating Stable Diffusion inference, an introduction to mypy, Llama 2, MIS-FM, LLaMA-Adapter, and more.
Preply plans to use the funds to assist its rapid expansion into AI and launch new teaching assistants to enhance language learning.
Tractable’s AI automates the insurance claims and damage assessment process, enabling real-time condition assessment and accurate repair estimates based on images captured via smartphone.
Meta has released Llama 2, its next-generation, open-source large language model, available in 7 billion, 13 billion, and 70 billion parameter versions. The company has made the model accessible for research and commercial applications, fostering further development in the field.
Join us for an insightful talk as we delve deep into the intricacies of assessing the performance and quality of LLMs and discover the best practices to ensure the reliability and accuracy of your LLM applications.
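One common starting point for the kind of LLM assessment this talk covers is exact-match accuracy against reference answers. Below is a minimal sketch of that idea; the helper names and sample data are hypothetical illustrations, not part of any specific evaluation framework.

```python
# Minimal sketch of exact-match LLM evaluation with light normalization.
# The function names and sample data are illustrative, not from a library.
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def exact_match_accuracy(predictions, references) -> float:
    """Fraction of predictions that match their reference after normalization."""
    matches = sum(
        normalize(p) == normalize(r) for p, r in zip(predictions, references)
    )
    return matches / len(references)

preds = ["Paris.", "the answer is 42", "Berlin"]
refs = ["Paris", "42", "Berlin"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 match after normalization
```

In practice, exact match is only one signal; real evaluation suites combine it with task-specific metrics and human or model-based judgments.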
Upcoming Data Phoenix community webinars: LLM evaluations, multilingual semantic search, rise in the use of synthetic data for regulated industries, how to use LLMs to interface with multiple data sources, best practices for building LLM-based applications, leveraging LLMs for enterprise usage.
This talk will discuss ways to reduce costs for NLP inference through a better choice of model, hardware, and model compression techniques.
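One of the compression techniques the talk alludes to is post-training 8-bit weight quantization. Real toolkits (e.g. PyTorch's quantization utilities) apply this per tensor or per channel; the hypothetical sketch below just shows the core scale/round/clamp idea on a plain list of weights.

```python
# Hypothetical sketch of post-training int8 weight quantization.
# Not a real library API; shows the scale/round/clamp mechanics only.

def quantize_int8(weights):
    """Map float weights onto int8 values with a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each reconstructed weight lies within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

Storing int8 instead of float32 weights cuts model memory roughly fourfold, which is one of the levers for reducing inference cost alongside model and hardware choice.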
Using LLMs is cool. Building end-to-end apps with LLMs is even cooler. Join Greg Loughnane and Chris Alexiuk on Wednesday, 5 July, to learn how to use LangChain to make some LLMOps magic happen.
This talk will discuss how businesses can leverage foundation models (FMs) through prompt engineering and build generative AI applications in the cloud.
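At its simplest, the prompt engineering mentioned above means wrapping a task in a reusable template with instructions and a few examples. The template text and example data below are hypothetical; a real application would send the assembled prompt to a cloud-hosted foundation model's API.

```python
# Hypothetical few-shot prompt template for a sentiment task.
# The instruction wording and examples are illustrative only.

FEW_SHOT_EXAMPLES = [
    ("The package arrived broken.", "negative"),
    ("Support resolved my issue quickly.", "positive"),
]

def build_prompt(review: str) -> str:
    """Assemble an instruction, few-shot examples, and the new input."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {review}\nSentiment:")
    return "\n".join(lines)

prompt = build_prompt("Great value for the price.")
print(prompt)
```

Ending the prompt at "Sentiment:" nudges the model to complete with just the label, a common pattern for constraining free-form model output.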