I studied Philosophy and Mathematical Logic at Tufts. Specifically, I gravitated towards consciousness and the mathematical formalization of natural language, two topics now at the center of AI. Oh also, I got to work with Daniel Dennett. Daniel Dennett!!
Below are two papers I'm particularly proud of. If you're a logic nerd, reach out and let's debate.
LLMs are surprisingly good at decoding implied meaning — tell one "She just got a raise" in response to "Those are expensive shoes" and it connects the dots. In my paper, I called these propositional implicatures and proposed a bracket notation 〈C & D〉 that separates the uttered claim from the implied one, allowing either to be independently negated. The harder problem, both for the paper and for modern NLP, is what I called instructive implicatures — sarcasm, irony, understatement — where unstated meaning doesn't add a proposition but transforms how to interpret one. The paper introduced a speculative "instructions operator" In(H), but honestly, I couldn't fully formalize it. NLP research has since confirmed the same gap: context-dependent meaning transformation remains a frontier problem in language understanding.
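If you want the notation in machine-checkable form, here is a minimal Lean 4 sketch of the idea (the Implicature structure and the two negation helpers are names I'm inventing here, not the paper's): the uttered claim and the implied claim sit side by side, and either can be negated without touching the other.

```lean
-- Hypothetical Lean 4 sketch of the ⟨C & D⟩ bracket notation.
-- C: the claim actually uttered ("Those are expensive shoes").
-- D: the implied claim the reply conveys (e.g. "so she can afford them").
structure Implicature where
  uttered : Prop   -- C, the uttered claim
  implied : Prop   -- D, the implied claim

-- Negate only the implied half; the uttered claim is untouched.
def negImplied (i : Implicature) : Implicature :=
  { i with implied := ¬ i.implied }

-- Negate only the uttered half; the implied claim is untouched.
def negUttered (i : Implicature) : Implicature :=
  { i with uttered := ¬ i.uttered }
```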
The proof that implicatures are formally notatable:
∀x[(Ux ∨ Ix) → Px]; ∀x(Px → Cx); ∀x(Cx → Bx); ∀x(Bx → Nx) ∴ ∀x[Ix → (Cx ∧ Bx ∧ Nx)]
All uttered and implied claims are propositions; all propositions are programmable; all programmable propositions are Boolean at base; all Boolean claims are negatable. Therefore implicatures are programmable, Boolean, and negatable.
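For the logic nerds, here is the same argument machine-checked: a sketch in Lean 4 under the premises as glossed above, with a theorem name of my own choosing.

```lean
-- P: proposition, U: uttered, I: implied (implicature),
-- C: programmable, B: Boolean at base, N: negatable.
theorem implicatures_negatable
    {α : Type} (P U I C B N : α → Prop)
    (h₁ : ∀ x, (U x ∨ I x) → P x)  -- uttered and implied claims are propositions
    (h₂ : ∀ x, P x → C x)          -- all propositions are programmable
    (h₃ : ∀ x, C x → B x)          -- all programmable propositions are Boolean at base
    (h₄ : ∀ x, B x → N x)          -- all Boolean claims are negatable
    : ∀ x, I x → C x ∧ B x ∧ N x :=
  fun x hI =>
    have hC : C x := h₂ x (h₁ x (Or.inr hI))
    ⟨hC, h₃ x hC, h₄ x (h₃ x hC)⟩
```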
Transformer attention mechanisms are, by design, access-consciousness machines: information made selectively available for downstream processing. The open question is whether that access ever constitutes experience. My paper argued it must, attacking Ned Block's influential distinction between access consciousness and phenomenal consciousness with a chain of biconditionals: if phenomenal states are exactly the reportable states, and reportable states are exactly the accessible ones, the two can never come apart. If the Dennett view my paper endorses is correct, sufficiently rich internal states in AI systems might warrant moral consideration without a bright dividing line. The paper's reportability argument now bears directly on AI interpretability: if a system can't report its internal states to anyone, including itself, is it experiencing anything at all?
The proof that access and phenomenal consciousness are inseparable:
∀x[(Px → Rx) ∧ (Rx → Px)]; ∀x[(Ax → Rx) ∧ (Rx → Ax)] ∴ ∀x[(Px → Ax) ∧ (Ax → Px)] ∴ ¬∃x[(Px ∧ ¬Ax) ∨ (¬Px ∧ Ax)]
All phenomenal states are reportable and vice versa; all accessible states are reportable and vice versa. By transitivity, phenomenal and accessible states always co-occur. There is no case where one exists without the other.
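Same deal here: a Lean 4 sketch of the inseparability argument, with predicate letters matching the proof above and theorem names of my own choosing.

```lean
-- P: phenomenal, R: reportable, A: access-conscious.
theorem phenomenal_iff_access
    {σ : Type} (P R A : σ → Prop)
    (hPR : ∀ x, P x ↔ R x)   -- phenomenal states are exactly the reportable ones
    (hAR : ∀ x, A x ↔ R x)   -- access states are exactly the reportable ones
    : ∀ x, P x ↔ A x :=
  fun x => (hPR x).trans (hAR x).symm

-- Corollary: no state is phenomenal without being accessed,
-- or accessed without being phenomenal.
theorem never_come_apart
    {σ : Type} (P R A : σ → Prop)
    (hPR : ∀ x, P x ↔ R x) (hAR : ∀ x, A x ↔ R x)
    : ¬ ∃ x, (P x ∧ ¬ A x) ∨ (¬ P x ∧ A x) :=
  fun ⟨x, h⟩ =>
    have hPA : P x ↔ A x := (hPR x).trans (hAR x).symm
    h.elim (fun hc => hc.2 (hPA.mp hc.1)) (fun hc => hc.1 (hPA.mpr hc.2))
```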