Arctic LLM Workshop 2023
🌟 Workshop Overview:
We are organizing a deep-dive workshop on the trending topic of Large Language Models (LLMs) and their multifaceted applications! Together, we will explore the far-reaching implications, advancements, and applications of LLMs while navigating their technical, ethical, and societal aspects. This engaging and insightful event will be moderated by Prof. Alexander Horsch and Assoc. Prof. Dilip K. Prasad, who bring a wealth of knowledge and expertise to the table.
🎙️ What to Expect:
In-Depth Discussions: Engage in rich dialogues and discussions about the complexities and nuances of LLMs.
Expert Insights: Gain valuable insights from experts in the field, fostering a deeper understanding of LLMs.
Interactive Q&A Sessions: Have your pressing questions answered in lively Q&A sessions.
Networking Opportunities: Connect with peers, experts, and enthusiasts, building meaningful relationships in the community.
📅 Save the Date!
27th-28th Oct. 2023 at Teknologibygget Tek 1.023, UiT Tromsø, Norway.
Keep your lunch slot free on October 28th for a pizza party. 🎉
📝 Preparation for Speakers
Exploring the Dimensions: Large Language Models and Society
Each speaker is expected to rigorously prepare for their talk, drawing information from the "Summary plan" file. This file contains an extensive outline and links to pertinent papers, ensuring each presentation is well-rounded, informative, and grounded in current research.
A philosophical view on LLMs
Overview of LLM Technologies
Introduction to LLM technology, different types of LLMs (decoder-only, encoder-decoder), GPT-3, and the vocabulary of LLMs
Evolution of Foundation LLM models
Discussion of the evolution of foundational LLM models and their notable characteristics, such as performance, scale, and pros & cons
Understanding Finetuning, RLHF and In-context Learning in LLM
Different types of finetuning mechanisms and their use cases, RLHF, in-context learning
Walkthrough Prompting Techniques
Exposure to different types of prompting and their benefits
Ensuring Ethical and Robust LLMs: A Dive into Alignment and Interpretability
Alignment, interpretability, visualization; robustness & adversarial prompting; toxicity and ethics
Self-attention and improvements in terms of speed
Self-attention and its hardware-level improvements
Distributed large scale training of LLMs and their challenges
Different types of distributed training strategies and challenges in model convergence
Aaron Vaughn Celeste
Application development with LLMs like Langchain
Vector databases, Langchain and other LLM application development tools
Parameter efficient finetuning and its application to LLMs
Adapters, Prefix, LoRA methods for parameter efficient learning
Challenges and Limitations of LLMs and their broader impacts on humans
Dilip K. Prasad
📚 Workshop Topics:
1. Evolution of Foundation LLM Models
Transition from closed- to open-source models, with exploration of size, performance, and scale. Delve into the interesting characteristics, pros, and cons of foundational LLMs.
2. Understanding Fine-tuning, RLHF, and In-context Learning
Unravel different types of fine-tuning mechanisms and examine their real-world examples in LLMs.
3. Walkthrough Prompting Techniques
Discover various prompting techniques and their efficacies, covering the most crucial ones within the allotted time.
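As a taste of what this topic covers, here is a minimal few-shot prompt built in Python. The task, example reviews, and labels are purely illustrative, not taken from the workshop material:

```python
# Hypothetical few-shot prompt for sentiment classification.
# The reviews and labels below are made-up illustrations.
examples = [
    ("The service was excellent.", "positive"),
    ("The food arrived cold.", "negative"),
]
query = "The staff were friendly and helpful."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"
print(prompt)
```

The labeled examples give the model the input-output pattern in context, and the prompt ends mid-pattern so the model's completion supplies the missing label.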
4. Alignment, Interpretability, and Robustness in LLMs
Discuss the alignment problem, delve into the ethics and toxicity in LLMs, and explore the significant role of prompting and fine-tuning.
5. Self-Attention and Improvements in Terms of Speed
Trace the path from basic self-attention to multi-head attention and its hardware-level speed improvements.
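For orientation before the talk, the core operation can be sketched in a few lines of NumPy. This is a single-head, unbatched sketch of scaled dot-product self-attention with random weights, not an optimized implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n, n) pairwise attention logits
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # each output is a weighted mix of values

rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Multi-head attention runs several such heads in parallel on lower-dimensional projections; the hardware-level improvements covered in the talk speed up exactly this scores-softmax-values pipeline.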
6. Distributed Large-Scale Training of LLMs and Associated Challenges
Compare models based on their distributed training strategies and discuss the differences, training loss spikes, and divergences encountered at scale.
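The simplest distributed strategy, data parallelism, can be illustrated on a toy problem: each "worker" computes a gradient on its own data shard, and the gradients are averaged (the all-reduce step) before a single shared update. This is only a single-process sketch of the idea; real systems rely on dedicated frameworks:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)                               # shared model parameters
data = rng.normal(size=(8, 2))                # toy features
targets = data @ np.array([2.0, -1.0])        # linear targets

shards = np.array_split(np.arange(8), 4)      # 4 "workers", 2 samples each
lr = 0.1
for _ in range(200):
    grads = []
    for shard in shards:                      # each worker: local gradient on its shard
        X, y = data[shard], targets[shard]
        err = X @ w - y
        grads.append(2 * X.T @ err / len(shard))
    w -= lr * np.mean(grads, axis=0)          # "all-reduce": average grads, then step

print(w)  # converges toward [2, -1]
```

Because shards are equal-sized, the averaged gradient equals the full-batch gradient, which is why synchronous data parallelism converges like single-machine training; the communication cost of that averaging is one of the challenges this topic examines.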
7. Concept of Vector Database and LLM Application Development Tools
Dive into the world of vector databases with a focus on Pinecone, and discuss performance, scalability, and flexibility in vector databases. Explore application development using LLMs.
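The retrieval step that services like Pinecone provide at scale boils down to nearest-neighbor search over embeddings. A toy in-memory version, with random vectors standing in for real text embeddings, looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
docs = ["pizza party", "self-attention", "vector databases"]
embeddings = rng.normal(size=(3, 8))   # stand-ins for real text embeddings

def top_k(query_vec, embeddings, k=1):
    # Cosine similarity = dot product of L2-normalised vectors.
    norm = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = norm(embeddings) @ norm(query_vec)
    return np.argsort(-sims)[:k]       # indices of the k most similar docs

query = embeddings[2] + 0.01 * rng.normal(size=8)  # a query near the third doc
idx = top_k(query, embeddings, k=1)[0]
print(docs[idx])  # "vector databases"
```

Production vector databases replace the exhaustive dot product with approximate nearest-neighbor indexes so the same query stays fast over millions of vectors; that performance/accuracy trade-off is part of what the talk discusses.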
8. Parameter Efficient Fine-tuning and Its Application to LLMs
Compare parameter-efficient fine-tuning with prompting / in-context learning, and explore methods such as adapters, prefix tuning, and LoRA.
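The idea behind LoRA, one of the methods this topic covers, can be sketched in a few lines: keep the pretrained weight frozen and learn only a low-rank update. The dimensions below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 16, 4, 8

W0 = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, rank r
B = np.zeros((d_out, r))               # trainable, zero-init so the update starts at 0

def lora_forward(x):
    delta = (alpha / r) * (B @ A)      # low-rank update, scaled by alpha / r
    return (W0 + delta) @ x            # effective weight: W0 + delta

x = rng.normal(size=d_in)
y = lora_forward(x)
# Trainable params: r * (d_in + d_out) = 128, vs d_in * d_out = 256 for full fine-tuning.
```

Because only A and B are trained, the parameter count scales with the rank r rather than with the full weight matrix, which is what makes the approach parameter-efficient.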
🎉 Join Us!
Embark on this enlightening journey and delve into the intricate world of Large Language Models with us. We look forward to seeing you at the workshop, where together, we will explore, learn, and innovate!
📧 Contact Information:
For further inquiries, feel free to reach out to: Dr. Samir Malakar at: email@example.com
Dr. Himanshu Buckchash at: firstname.lastname@example.org