---
draft: False
date: 2023-11-26
slug: python-caching
tags:
- caching
- functools
- diskcache
- python
- rag
authors:
- jxnl
---

# Improving your RAG with Question Types

!!! note "This is a work in progress"

    This is a work in progress. I'm going to use bullet points to outline the main points of the article.

- I've been doing some consulting, and a common question is "How do I improve my RAG?"
- There are always black-box responses like "let's add Cohere" or "let's change the chunk size", but generic solutions get you generic results.
- Instead my recommendation is to simply... look at your data.
- Once we look at the data, we'll have the information we need to identify the best intervention strategy and to figure out where we might want to specialize our model.

In this blog we'll cover a range of things that can point us in the right direction, and go over some examples of companies that can do this kind of exploration. We'll leave it open-ended as to what the interventions are, but give you the tools to drill down into your data and figure out what you need to do. For example, if Google learns that a large portion of queries are looking for directions or a location, they might want to build a separate index and release a maps product rather than expecting an HTML page to be the best response.

## What do I look for?

- The first thing we should look at is simply the questions we're asking.
- Find some inductive bias in the questions we're asking.
- If we have a general idea (we could even do topic modeling, but we can cover that later) we can start to look at the questions we're asking.
- Then we can build a question type classifier to help us identify the question type.
- We can look at two kinds of statistics: the distribution of question types and the quality of the responses.
- Then, by looking at these two quantities, we can determine our intervention strategy.

1. If counts are high and quality is low, we likely have to do something to improve the quality of the responses.
2. If counts are low and quality is low, we might just want to add some logic that says "if the question type is X, then don't answer it and give a canned response".
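
As a rough sketch of the intervention logic above (the sample data, scores, and thresholds here are all hypothetical), imagine we've already classified a handful of queries by question type and scored each response:

```python
from collections import defaultdict

# Hypothetical sample: (question_type, response_quality) pairs, where
# quality is something like a thumbs-up rate or relevance score in [0, 1].
labeled = [
    ("directions", 0.2), ("directions", 0.3), ("directions", 0.1),
    ("lookup", 0.9), ("lookup", 0.8),
    ("comparison", 0.4),
]

def intervention_report(samples, quality_threshold=0.5, count_threshold=3):
    """Aggregate count and mean quality per question type, then pick an action."""
    stats = defaultdict(lambda: {"count": 0, "total": 0.0})
    for qtype, quality in samples:
        stats[qtype]["count"] += 1
        stats[qtype]["total"] += quality

    report = {}
    for qtype, s in stats.items():
        mean_quality = s["total"] / s["count"]
        if mean_quality >= quality_threshold:
            action = "leave as-is"                          # quality is fine
        elif s["count"] >= count_threshold:
            action = "invest in improving quality"          # high counts, low quality
        else:
            action = "canned response / decline to answer"  # low counts, low quality
        report[qtype] = {
            "count": s["count"],
            "mean_quality": round(mean_quality, 2),
            "action": action,
        }
    return report

report = intervention_report(labeled)
```

The thresholds are arbitrary; the point is only that two cheap statistics, counts and quality per question type, are enough to pick a first intervention.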

### Consider Google.

You can imagine that on day one of Google, they could spend tonnes of time theorizing about what to build, but they could also just look at the data and see what people are searching for.

We might discover that there are tonnes of search queries that look like directions and do poorly, because only a few websites give directions from one place to another. From that, they might identify that they want to support a maps feature; the same goes for photos, shopping, or videos.

We might also notice that for sports games, showtimes, and weather, they might want to return smaller modules rather than a complete new "page" of results. All of these decisions could likely be made by inspecting the data early on in the business.

## Unreasonable Effectiveness of Looking

Once you've looked, you'll usually break things down into two categories:

1. Topics
2. Capabilities

### Topics

I see topics as the data that could be retrieved via text or semantic search. For embedding search, it's the types of text that are searched for: "privacy documents, legal documents, etc." are all topics, since they can be completely captured by a search query.

**Failure Modes**

1. Poor inventory: Usually when topics fail, it's a result of poor inventory. If 50% of your queries are about privacy documents and you don't have any privacy documents in your inventory, then you're going to have a bad time.
2. Query mismatch: This could be as simple as queries for "GDPR policy" and "Data Privacy Policy" both being about the same topic, but depending on the search method you might not be able to find the right documents.
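
To make the query-mismatch failure mode concrete, here's a toy keyword search (the corpus and synonym table are made up for illustration) where "GDPR compliance" misses a "data privacy policy" document until we expand the query's terms:

```python
# Toy corpus and synonym table -- both are made-up illustrations.
DOCS = {"doc1": "data privacy policy", "doc2": "engineering handbook"}
SYNONYMS = {"gdpr": {"privacy", "data"}}

def keyword_search(query, expand=False):
    """Naive bag-of-words match, optionally expanding query terms with synonyms."""
    terms = set(query.lower().split())
    if expand:
        for term in list(terms):
            terms |= SYNONYMS.get(term, set())
    return [doc_id for doc_id, text in DOCS.items() if terms & set(text.split())]

keyword_search("GDPR compliance")               # → []  (same topic, zero shared words)
keyword_search("GDPR compliance", expand=True)  # → ["doc1"]
```

Real systems reach for embeddings or query rewriting instead of a hand-built synonym table, but the failure looks the same: two phrasings of one topic that your search method treats as unrelated.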

### Capabilities

Capabilities are the things you can do with your index itself. For example, if you have a plain-Jane search index over text only, being able to answer comparison questions and timeline questions are capabilities you can't support unless you bake them into your index. Otherwise we'll embed queries into something strange: "What happened last week" NEEDS to be resolved to a date, otherwise you're going to have a bad time. This is something we covered a lot in [RAG is more than embedding search](./rag-and-beyond.md).

Here are some more examples of capabilities:

1. Ownership of content: "Who uploaded the XYZ document about ABC"
2. Timeline queries: "What happened last week"
3. Comparisons: "What is the difference between X and Y"
4. Directions: "How do I get from X to Y"
5. Content Type: "Show me videos about X"
6. Document Metadata: "Show me documents that were created by X in the last week"

These are all families of queries that cannot be solved via embeddings alone and require creative solutions based on your use case!
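
A first pass at routing these query families doesn't need a model at all. As a sketch (the patterns below are illustrative, not exhaustive), a rule-based router can send each family to a specialized index instead of plain embedding search:

```python
import re

# Minimal rule-based router: map a query to a capability family so it can
# be dispatched to a specialized index. Patterns here are illustrative only.
ROUTES = [
    ("ownership",  re.compile(r"\bwho (uploaded|created|wrote)\b", re.I)),
    ("timeline",   re.compile(r"\b(last week|yesterday|this month)\b", re.I)),
    ("comparison", re.compile(r"\bdifference between\b", re.I)),
    ("directions", re.compile(r"\bhow do i get from\b", re.I)),
]

def route(query: str) -> str:
    """Return the first matching capability family, else fall back to search."""
    for family, pattern in ROUTES:
        if pattern.search(query):
            return family
    return "default_search"
```

In practice you'd likely replace the regexes with a small classifier once you've looked at enough real queries, but the routing structure stays the same.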