Ideas
Please feel free to steal.
Technical Ideas
- Transformers architecture
- It’s kinda weird that a Transformer has to compute the next-token embedding by adding (via residual connections) to the current token’s embedding. It mostly makes sense, but why not start from a zero input (or just a position embedding) and rely entirely on the attention mechanism? (First sketch after this list.)
- What if only half of the embedding weights were tied between the input embedding and the LM head? (Sketch after this list.)
wte = concat(A, B)
lm_head = concat(A, C)
- What if each layer could attend to the keys and values of the previous layer? That would provide non-residual data flow between layers. (Sketch after this list.)
- Might need a layer embedding: perhaps a learnable per-layer offset vector, or just a component that’s 0 for the same layer and 1 for the other layer
- Related to PaLM’s “multi-query attention”
- Attention heads currently can’t turn themselves off. What if we always threw a zero value into the mix, with a learnable key? (Sketch after this list.)
- What if only some dims had residual connections? (Sketch after this list.)
- Could we get a Transformer where everything lives in the same vector space, even queries and keys? (Sketch after this list.)
- The causal mask limits what the model can learn from tokens even long before the one currently being generated. Is it really necessary?
- Could we summarize the past (replace spans of N tokens with N/2 tokens, perhaps?) to model longer sequences? The Perceiver architecture does something like this, but maybe too aggressively? (Sketch after this list.)
- Interpretable models
- Decision trees with soft boundaries (sketch after this list)
- Meta-tree: one model gets to set the threshold values for a 2nd simple decision tree that actually makes the decision.
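
A minimal PyTorch sketch of the “no residual add” idea above. The class name and block structure are my own invention; the point is just that the attention output *replaces* the token’s representation instead of being added to it:

```python
import torch
import torch.nn as nn

class NoResidualBlock(nn.Module):
    """Hypothetical block: the next representation is built entirely by
    attention over the sequence; nothing is added back from the token's
    own current embedding."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x, attn_mask=None):
        h = self.ln(x)
        # Standard blocks compute x + attn(h); here the attention output
        # replaces x, so all information must flow through attention.
        attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask)
        return self.mlp(attn_out)
```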
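The half-tied embedding idea (wte = concat(A, B), lm_head = concat(A, C)) as a runnable sketch; vocab_size, d_model, and the 0.02 init scale are placeholder choices:

```python
import torch
import torch.nn as nn

class PartiallyTiedEmbeddings(nn.Module):
    """wte = [A | B], lm_head = [A | C]: the A half is shared between the
    input embedding and the LM head; B and C are independent."""
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        half = d_model // 2
        self.A = nn.Parameter(torch.randn(vocab_size, half) * 0.02)  # tied half
        self.B = nn.Parameter(torch.randn(vocab_size, half) * 0.02)  # input-only
        self.C = nn.Parameter(torch.randn(vocab_size, half) * 0.02)  # head-only

    def embed(self, token_ids):
        # (batch, seq) -> (batch, seq, d_model)
        return torch.cat([self.A, self.B], dim=1)[token_ids]

    def logits(self, hidden):
        # (batch, seq, d_model) -> (batch, seq, vocab_size)
        return hidden @ torch.cat([self.A, self.C], dim=1).t()
```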
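A single-head sketch of the cross-layer attention idea, with the learnable layer_offset playing the role of the layer embedding mentioned above (causal masking and multi-head splitting omitted for brevity):

```python
import torch
import torch.nn as nn

class CrossLayerAttention(nn.Module):
    """Attend over this layer's K/V plus the previous layer's K/V.
    A learnable offset added to the previous layer's keys lets the model
    tell the two sources apart."""
    def __init__(self, d_model: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.layer_offset = nn.Parameter(torch.zeros(d_model))

    def forward(self, x, prev_kv=None):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        k_all, v_all = k, v
        if prev_kv is not None:
            pk, pv = prev_kv
            k_all = torch.cat([k, pk + self.layer_offset], dim=1)  # non-residual path
            v_all = torch.cat([v, pv], dim=1)
        att = torch.softmax(q @ k_all.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        return self.out(att @ v_all), (k, v)  # hand this layer's K/V to the next layer
```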
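The “head can turn itself off” idea: prepend one learnable key whose value is all zeros, so any attention mass placed there contributes nothing to the output. A hypothetical single-head sketch:

```python
import torch
import torch.nn as nn

class AttentionWithNullSlot(nn.Module):
    """Always include a learnable 'null' key paired with a zero value;
    a head attends to it when it wants to output (near) nothing."""
    def __init__(self, d_head: int):
        super().__init__()
        self.null_key = nn.Parameter(torch.zeros(1, 1, d_head))

    def forward(self, q, k, v):  # each (batch, seq, d_head)
        b = q.shape[0]
        k = torch.cat([self.null_key.expand(b, -1, -1), k], dim=1)
        v = torch.cat([torch.zeros(b, 1, v.shape[-1], device=v.device), v], dim=1)
        att = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        return att @ v  # probability on the null slot adds nothing
```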
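Partial residual connections could be as simple as masking the skip path; a sketch, with n_res as the number of dims that keep the residual:

```python
import torch
import torch.nn as nn

class PartialResidual(nn.Module):
    """Residual connection on the first n_res dims only; the remaining
    dims are fully overwritten by the sublayer's output."""
    def __init__(self, sublayer: nn.Module, d_model: int, n_res: int):
        super().__init__()
        self.sublayer = sublayer
        mask = torch.zeros(d_model)
        mask[:n_res] = 1.0
        self.register_buffer("res_mask", mask)

    def forward(self, x):
        return self.res_mask * x + self.sublayer(x)
```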
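For the shared-vector-space question, one starting point is a single projection used for both queries and keys, which puts them in one space and makes the score matrix symmetric (just one possible reading of the idea):

```python
import torch
import torch.nn as nn

class SharedSpaceAttention(nn.Module):
    """Queries and keys come from the same projection, so they live in the
    same vector space and attention scores are symmetric."""
    def __init__(self, d_model: int):
        super().__init__()
        self.qk = nn.Linear(d_model, d_model, bias=False)  # shared Q/K map
        self.v = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x):
        q = k = self.qk(x)
        att = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        return att @ self.v(x)
```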
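And the gentler-than-Perceiver summarization idea: halve the resolution of the distant past by mean-pooling pairs of positions, keeping recent tokens intact. keep_recent is an arbitrary choice:

```python
import torch

def pool_past(h: torch.Tensor, keep_recent: int = 64) -> torch.Tensor:
    """Replace pairs of old positions with their mean (N -> N/2), keeping
    the last keep_recent tokens at full resolution. h: (batch, seq, d)."""
    past, recent = h[:, :-keep_recent], h[:, -keep_recent:]
    if past.shape[1] >= 2:
        t = (past.shape[1] // 2) * 2  # drop an odd leftover position
        past = past[:, :t].reshape(past.shape[0], t // 2, 2, -1).mean(dim=2)
    return torch.cat([past, recent], dim=1)
```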
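Finally, the soft decision tree idea as a depth-1 sketch: a sigmoid split keeps the model differentiable while still reading like a threshold rule. For the meta-tree variant, a second model would emit threshold per example instead of learning it as a parameter:

```python
import torch
import torch.nn as nn

class SoftStump(nn.Module):
    """Depth-1 decision tree with a soft (sigmoid) boundary: differentiable,
    but still interpretable as 'if x[feature] > threshold, use right leaf'."""
    def __init__(self, feature: int, temperature: float = 0.1):
        super().__init__()
        self.feature = feature
        self.threshold = nn.Parameter(torch.zeros(1))
        self.leaves = nn.Parameter(torch.zeros(2))  # left / right predictions
        self.temperature = temperature

    def forward(self, x):  # x: (batch, n_features)
        p_right = torch.sigmoid((x[:, self.feature] - self.threshold) / self.temperature)
        return (1 - p_right) * self.leaves[0] + p_right * self.leaves[1]
```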
High-level goals
- A Copilot that never shows something that’s incorrect?
- Reference material, contextualized
- Changes in test results if certain changes are made
- Dynamically generate a verifiable “compilation” from high-level goal to low-level implementation.
- A DSL generator?
- A compiler for ambiguous code that progressively reduces the ambiguity?
Tools
- clients for the OpenAI API to:
- systematically compare logprobs of different phrases in different contexts (sketch after this list)
- get confidence bounds for these comparisons somehow, perhaps by adding “noise” to the context?
- visualizing embeddings
- LMs to generate variations of class exercises
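
A minimal sketch of the logprob-comparison client above, assuming the legacy OpenAI Completions endpoint (which can score the prompt’s own tokens via echo=True with max_tokens=0); the model name is illustrative and not every model supports this:

```python
from openai import OpenAI

client = OpenAI()

def phrase_logprob(context: str, phrase: str, model: str = "davinci-002") -> float:
    """Total log-probability of `phrase` given `context`."""
    resp = client.completions.create(
        model=model,
        prompt=context + phrase,
        max_tokens=0,   # don't generate; just score the prompt
        echo=True,      # return logprobs for the prompt tokens themselves
        logprobs=0,
    )
    lp = resp.choices[0].logprobs
    # Count the tokens belonging to the context, then sum over the rest.
    n_ctx = sum(1 for off in lp.text_offset if off < len(context))
    return sum(lp.token_logprobs[n_ctx:])

# e.g. compare two phrasings in the same context:
# phrase_logprob("The capital of France is", " Paris")
# phrase_logprob("The capital of France is", " Lyon")
```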
Future Blog Posts
- Superficiality in ML models
- there’s much more of it than we thought.
- opportunity to learn to distinguish the thing from the appearance of the thing. This can really help concept learning.
- Automate the boring parts of education
- maybe: more useful breadth, more human connection, fewer classes?
- opportunities for scaling up individual attention (vs replacing instructors)
- opportunity to scaffold the “algebra” of complex tasks, like graphing calculators do.
- related to the gratitude post about disconnecting from humans
- we’ve been doing this already: content farms
- truth vs popularity / engagement / what’s consistent with what everyone thinks.
- 2x2: change in need for human decision vs change in information (about the outcome)
- Does AI make it easier to choose low-value work? Getting a dopamine hit for getting something “done” vs thinking and reflecting.
- With RLHF and similar, we’re finding ways of programming models again, not just purely learning from data. But unlike hand-programmed logic, these programs will generalize in powerful and often surprising ways.