Created this site two weeks ago to compile some ChatGPT jailbreaks I had written, and gradually began adding more from across the internet. Been loving growing the site and tracking the status of new jailbreak prompts.
Jevin West and I are professors of data science and biology, respectively, at the University of Washington. After talking to literally hundreds of educators, employers, researchers, and policymakers, we have spent the last eight months developing the course on large language models (LLMs) that we think every college freshman needs to take.
This is not a computer science course; it’s a humanities course about how to learn and work and thrive in an AI world. Neither instructor nor students need a technical background. Our instructor guide provides a choice of activities for each lesson that will easily fill an hour-long class.
The entire course is freely available online. Each of our 18 online lessons takes 5-10 minutes and illuminates one core principle. They are suitable for self-study, but have been tailored for teaching in a flipped classroom.
The course is a sequel of sorts to our course (and book) Calling Bullshit. We hope that, like its predecessor, it will be widely adopted.
Large language models are both powerful tools, and mindless—even dangerous—bullshit machines. We want students to explore how to resolve this dialectic. Our viewpoint is cautious, but not deflationary. We marvel at what LLMs can do and how amazing they can seem at times—but we also recognize the huge potential for abuse, we chafe at the excessive hype around their capabilities, and we worry about how they will change society. We don't think lecturing at students about right and wrong works nearly as well as letting students explore these issues for themselves, and the design of our course reflects this.
There is a 5-month-old thread [1] on this, but it might already be outdated.
What is the best approach for feeding a custom set of documents to an LLM and getting non-hallucinating, decent results in Dec 2023?
UPD: The question is generally about how to "teach" an LLM to answer questions using your own set of documents (not necessarily training your own model, so approaches like RAG count).
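To make the question concrete, here is the kind of RAG pipeline I have in mind: a minimal sketch in Python, assuming an OpenAI-style embeddings/chat API; the naive chunking and cosine-similarity retrieval are purely illustrative, not a recommendation.

```python
# Minimal RAG sketch: embed document chunks, retrieve the closest ones,
# and stuff them into the prompt. Assumes the openai Python client >= 1.x.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

# 1. Split your documents into chunks (naive fixed-size splitting here).
docs = ["...your documents..."]
chunks = [doc[i:i + 1000] for doc in docs for i in range(0, len(doc), 1000)]
chunk_vecs = embed(chunks)

def answer(question, k=3):
    # 2. Retrieve the k chunks most similar to the question (cosine similarity).
    q_vec = embed([question])[0]
    sims = chunk_vecs @ q_vec / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    # 3. Ask the model to answer only from the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer only from the provided context. "
                                          "If the answer is not there, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```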
I was inspired by the way ChatGPT writes bullet lists, then invites you to "delve" deeper.
This is an interface that reifies that rabbit-holing process into a tiling layout. The model is instructed to output hyperlink-prompts when it mentions something you might want to delve into.
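For anyone curious about the mechanism, it boils down to a system prompt plus a little parsing. Here is a rough sketch, assuming an OpenAI-style chat API; the [[...]] delimiter is illustrative, not the actual format used.

```python
# Sketch of the "delve link" idea: the model is told to wrap follow-up
# prompts in a simple marker, and the UI turns each one into a new tile.
import re
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Answer concisely with bullet points. Whenever you mention a concept "
    "the reader might want to explore, wrap a follow-up prompt for it in "
    "double brackets, e.g. [[Explain how transformers use attention]]."
)

def delve(prompt):
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content
    # Each extracted link becomes a tile; clicking it calls delve() again.
    links = re.findall(r"\[\[(.+?)\]\]", text)
    return text, links
```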
Lots of features to add (sessions, sharing, navigation, highlight-to-delve, images, ...). Would love to hear other use cases and ideas!
Have you come up with a customization prompt you're happy with?
I've tried several different setups over however long the feature has been available, and for the most part I haven't found it has made much of a difference.
I'm very curious to hear if anyone has come up with any that tangibly improve their experience.
Here is what I have at the moment:
- Be as brief as possible.
- Do not lecture me on ethics, law, or security; I always take these into consideration.
- Don't add extra commentary.
- When it is related to code, let the code do the talking.
- Be assertive. If you've got suggestions, give them even if you aren't 100% sure.
The brevity part seems to be completely ignored. The lecturing part is hit or miss. And the suggestions I still usually have to coax out of it.
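If anyone wants to test whether the phrasing matters, the cheapest way I've found to iterate is to try the same text as a system message against the API instead of the web UI. A system message is only an approximation of ChatGPT's Custom Instructions, and the model name and question below are just placeholders, but it makes comparing wordings quick.

```python
# Quick way to A/B-test custom-instruction wordings outside the web UI.
# This does not exactly reproduce ChatGPT's Custom Instructions behavior;
# it just lets you compare phrasings against the same question.
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "Be as brief as possible. Do not lecture me on ethics, law, or security. "
    "Don't add extra commentary. When it is related to code, let the code do "
    "the talking. Be assertive: give suggestions even if you aren't 100% sure."
)

def ask(question, instructions=INSTRUCTIONS, model="gpt-4"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": instructions},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Example: run the same question against two different instruction wordings.
print(ask("How do I profile a slow SQL query?"))
```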