
Beyond Words: Exploring the Landscape of Large Language Models!

Updated: Oct 14


Arctic LLM Workshop 2023

🌟 Workshop Overview:

We are organizing a deep-dive workshop on the trending topic of Large Language Models (LLMs) and their multifaceted applications! Together, we will explore the far-reaching implications, advancements, and applications of LLMs, while navigating their technical, ethical, and societal aspects. This engaging and insightful event will be moderated by Prof. Alexander Horsch and Assoc. Prof. Dilip K. Prasad, who bring a wealth of knowledge and expertise to the table.

๐ŸŽ™๏ธ What to Expect:

  • In-Depth Discussions: Engage in rich dialogues and discussions about the complexities and nuances of LLMs.

  • Expert Insights: Gain valuable insights from experts in the field, fostering a deeper understanding of LLMs.

  • Interactive Q&A Sessions: Have your pressing questions answered in lively Q&A sessions.

  • Networking Opportunities: Connect with peers, experts, and enthusiasts, building meaningful relationships in the community.



📅 Save the Date!

27th-28th Oct. 2023 at Teknologibygget Tek 1.023, UiT Tromsø, Norway.

Keep your lunch slot free on October 28th for a pizza party. 🎉


๐Ÿ“ Preparation for Speakers

Exploring the Dimensions: Large Language Models and Society

Each speaker is expected to rigorously prepare for their talk, drawing information from the "Summary plan" file. This file contains an extensive outline and links to pertinent papers, ensuring each presentation is well-rounded, informative, and grounded in current research.

Speaker Lineup

| Speaker | Tentative title | Comments |
| --- | --- | --- |
| Alexander Horsch | Welcome remarks | A philosophical view on LLMs |
| Himanshu Buckchash | Overview of LLM Technologies | Introduction to LLM technology, different types of LLMs (decoder-only, encoder-decoder), GPT-3, vocabulary of LLMs |
| Abhinanda Punnakkal | Evolution of Foundation LLM Models | Discussion of the evolution of foundational LLM models and their notable characteristics, such as performance, scale, and pros & cons |
| Rohit Agarwal | Understanding Fine-tuning, RLHF, and In-context Learning in LLMs | Different types of fine-tuning mechanisms and their use cases, RLHF, in-context learning |
| Iqra Saleem | Walkthrough of Prompting Techniques | Exposure to different types of prompting and their benefits |
| Ayush Somani | Ensuring Ethical and Robust LLMs: A Dive into Alignment and Interpretability | Alignment, interpretability, visualization, robustness & adversarial prompting, toxicity and ethics |
| Nirwan Banerjee | Self-attention and Improvements in Terms of Speed | Self-attention and its hardware-level improvements |
| Suyog Jadav | Distributed Large-scale Training of LLMs and Their Challenges | Different types of distributed training strategies and challenges in model convergence |
| Aaron Vaughn Celeste | Application Development with LLMs, e.g., LangChain | Vector databases, LangChain, and other LLM application development tools |
| Roni Paul | Parameter-efficient Fine-tuning and Its Application to LLMs | Adapters, prefix tuning, and LoRA methods for parameter-efficient learning |
| Samir Malakar | Challenges and Limitations of LLMs and Their Broader Impacts on Humans | |
| Dilip K. Prasad | Concluding Remarks | |


📚 Workshop Topics:

1. Evolution of Foundation LLM Models

  • Transition from closed to open-source, exploration of size, performance, and scale. Delve into the interesting characteristics, pros, and cons of foundational LLMs.

2. Understanding Fine-tuning, RLHF, and In-context Learning

  • Unravel different types of fine-tuning mechanisms and examine their real-world examples in LLMs.

3. Walkthrough Prompting Techniques

  • Discover various prompting techniques and their efficacy, covering the most crucial ones within the allotted time.
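To make the idea concrete, a few-shot prompt is simply structured text: a few worked examples followed by the new input for the model to complete. The sketch below is illustrative (the task and wording are our own assumptions, not workshop material):

```python
# A minimal few-shot prompt: worked examples, then the new input.
# The model is expected to continue the pattern after the last "Sentiment:".
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: "The battery lasts all day."
Sentiment: positive

Review: "The screen cracked within a week."
Sentiment: negative

Review: "Setup was quick and painless."
Sentiment:"""

print(few_shot_prompt)
```

The same string would be sent as-is to any LLM API; in-context learning means the examples steer the model without any weight updates.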

4. Alignment, Interpretability, and Robustness in LLMs

  • Discuss the alignment problem, delve into the ethics and toxicity in LLMs, and explore the significant role of prompting and fine-tuning.

5. Self-Attention and Improvements in Terms of Speed

  • Trace the mechanism from basic self-attention through multi-head self-attention to its hardware-level improvements.
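As a warm-up for this topic, single-head scaled dot-product attention fits in a few lines of NumPy. This is a minimal sketch of the standard formulation, not code from the workshop:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted sum of values

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                 # 4 tokens, model dim 8
out = scaled_dot_product_attention(x, x, x)     # self-attention: Q = K = V
print(out.shape)                                # (4, 8)
```

Multi-head attention runs several such heads in parallel on learned projections of the input, and the hardware-level improvements discussed in the talk (e.g. fused kernels) optimize exactly this computation.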

6. Distributed Large-Scale Training of LLMs and Associated Challenges

  • Compare models based on their training strategies and discuss the differences, training loss spikes, and divergences.

7. Concept of Vector Database and LLM Application Development Tools

  • Dive into the world of vector databases with a focus on Pinecone, and discuss the performance, scalability, and flexibility of vector databases. Explore application development using LLMs.
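At its core, a vector database answers nearest-neighbour queries over embeddings. The toy NumPy sketch below shows the retrieval step only (the example vectors are made up; real systems like Pinecone store millions of embeddings and use approximate indexes):

```python
import numpy as np

def top_k_cosine(query, index, k=2):
    """Return indices of the k index vectors most similar to query (cosine)."""
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    sims = m @ q                      # cosine similarity to every stored vector
    return np.argsort(-sims)[:k]      # indices of the k best matches

# Toy "document embeddings"; in practice these come from an embedding model.
docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
hits = top_k_cosine(np.array([1.0, 0.05]), docs)
print(hits)   # the two vectors closest to the query
```

Retrieval-augmented LLM applications (e.g. with LangChain) wrap this lookup: embed the user question, fetch the top-k documents, and paste them into the prompt.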

8. Parameter Efficient Fine-tuning and Its Application to LLMs

  • Compare with prompting / in-context learning and explore parameter-efficient fine-tuning methods such as adapters, prefix tuning, and LoRA.
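To illustrate why methods like LoRA are parameter-efficient, the sketch below shows the low-rank update idea in NumPy (dimensions and initialization are assumptions for illustration): the frozen weight W is augmented with a trainable product of two thin matrices, so only 2·d·r parameters train instead of d²:

```python
import numpy as np

d, r = 16, 2                              # model dim, low rank (r << d)
rng = np.random.default_rng(1)
W = rng.standard_normal((d, d))           # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01    # trainable down-projection
B = np.zeros((r, d))                      # trainable up-projection, zero-init

x = rng.standard_normal(d)
y = x @ (W + A @ B)                       # adapted forward pass; W never updates
assert np.allclose(y, x @ W)              # zero-init: output unchanged at start

print(A.size + B.size, "trainable params vs", W.size, "frozen")
```

Because the zero-initialized factor makes the update start at zero, fine-tuning begins exactly at the pretrained model and only the small A and B matrices receive gradients.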


🎉 Join Us!

Embark on this enlightening journey and delve into the intricate world of Large Language Models with us. We look forward to seeing you at the workshop, where together, we will explore, learn, and innovate!


#StayInformed #LLMWorkshop #DeepDiveIntoLLM


📧 Contact Information:

For further inquiries, feel free to reach out to Dr. Samir Malakar at s.malakar@uit.no or Dr. Himanshu Buckchash at himanshu.buckchash@uit.no.

Summary Plan.pdf (Download PDF • 670KB)

Poster - LLM workshop.pdf (Download PDF • 1.60MB)

