Professor Robert Truswell Inaugural Lecture

Recording of Professor Robert Truswell's Inaugural Lecture

Hello. Good evening, everybody. It's my great pleasure to welcome you all to this really special occasion. Tonight, we celebrate Professor Rob Truswell's inaugural lecture, marking his promotion to the Chair of Syntax and Semantics. It's a really proud moment, not just for Rob, but also for the School of Philosophy, Psychology and Language Sciences, of which I'm still the new head. It's a happy and proud moment for me, too. So far, I've attended, I believe, four inaugural lectures within PPLS. This is the first time I get to introduce the protagonist — Rob, whose research I really admire. Rob also has a big role within the school as postgraduate director, and he's a real pleasure to work with.

I'll tell you a little bit about Rob's academic journey, which is both impressive and inspiring. He began with a first-class degree in modern languages from the University of Oxford, which was followed by an MPhil in general linguistics and comparative philology, also at Oxford, then a PhD in phonetics and linguistics from UCL. His career has taken him quite far and quite wide. From a British Academy postdoctoral fellowship here in Edinburgh, he went to an assistant professorship in Ottawa, and eventually returned to Edinburgh in 2014 as a Chancellor's Fellow. Since then he has moved through the ranks, culminating in his appointment to this chair in 2023.

As we mark this occasion, it's really wonderful also to have Rob's family here with us. His brother, his mother, his wife, and his son are here with us tonight, making it an even more special event. And when I asked Rob if I could perhaps include a personal anecdote that they might be able to relate to especially, he gave me a response that I really can't resist sharing. He said, well, my family might actually prefer a really dry and academic introduction, because they don't normally get the opportunity to experience that. So I'm trying to deliver that. But I do feel compelled to say a little bit, as well, about Rob's hobbies, which are anything but dry and staid.

Outside of academia, Rob is a passionate mountaineer, orienteer and ultra runner. I'll admit I had to look up what an ultra runner actually is and does. The prefix ultra comes from the Latin for beyond. So the question then is: beyond what, right? It turns out that it's runners who run beyond the distance of a marathon. So if a marathon isn't enough, you know, you can become an ultra runner. Typically, these runs, I understand, are over 30 miles, and I'm sure Rob can give us more details if you're curious. For our purposes, I'd say that this potentially also tells us something about Rob's character. Running a marathon isn't enough; he wants to go beyond that. And whether it's on the trail or in the study of syntax and semantics, I think Rob embodies that sort of determination and perseverance and endurance. And interestingly, Rob recently told me about his son, who is an excellent runner in his own right and has, you know, recently completed a half marathon.
And I thought, when I meet his son, he's probably about Rob's height, and he's probably about 18 years old — but he's not. So I'm really, really impressed. Well done to you, young man; you clearly share some of your father's traits.

In linguistics, Rob's research focuses on the fascinating relationship between syntax and semantics. He's especially interested in topics such as WH movement, the history of relative clauses, and the architecture of grammar. He has authored three monographs: Events, Phrases, and Questions in 2011, Syntax and its Limits in 2013, and Coordination and the Syntax–Discourse Interface in 2022. He also edited the Oxford Handbook of Event Structure in 2019, and he has led major projects such as an AHRC- and DFG-funded investigation into locality and the argument–adjunct distinction. He is currently co-investigator on a Leverhulme project on the question of the autonomy of syntax, where he looks at Romance causative and perception verbs. It's fair to say that his contributions are rigorous, insightful and wide-ranging. And in this talk, I gather, he's going to try and present a kind of unified picture across all these various contributions that he's made.

So the lecture tonight promises to reflect the qualities inherent in Rob's work. The topic, the search for a simple theory of syntax, targets a key area of tension in much of linguistic theorising and bridges disciplines across linguistics, philosophy and psychology, which is why it's a really fitting lecture to take place within this school. It's now my great pleasure to hand over and welcome Professor Rob Truswell to deliver his inaugural lecture. Please join me in giving him a warm round of applause.

Thank you. Lovely. Thank you. I didn't really want that to stop. That was charming, and I wish I could be so articulate about myself. So, yes, I have become Professor of Syntax and Semantics, and there is a part of this process right at the beginning where you have to decide what you're a professor of. And it's the first time I've had this problem: if someone says to you, what do you do, then normally I, well, you know, go into a kind of four-paragraph explanation. But you can't put that after 'Professor'. You can only get a noun phrase. My first try was 'theoretical linguistics', and I was told that was too much of a land grab. You've got to leave some linguistics for other people; you're taking too much. So that was rejected. Then I was thinking, well, a professor of the word 'which'? I really know a lot about 'which'. But apparently, you know, you've got to take a sensible amount of terrain. I could have been Professor of Syntax, but we have one of those, and you can't be a professor of more syntax. Caroline Heycock is Professor of Syntax, and I'm not going to fight her for it. This is not something you determine by arm wrestling. And so I ended up as a Professor of Syntax and Semantics, as a kind of uneasy compromise. But even this is not quite how I see what I'm trying to do as a researcher.
I'm interested in syntax and the things adjacent to syntax, all the way around it. So I was wondering if it was too late to change to Professor of Syntax, et cetera. And really what I would like to do today is to focus on the balance between the syntax and the et cetera, because one thing we can't do as linguists is ignore the complexity of the linguistic data that we're faced with. But it is up to us as analysts to determine how much of the explanatory burden for that complexity falls on syntax and how much falls on things adjacent to syntax. And in this talk, with a search for a simple theory of syntax, I can't just make life easier for syntacticians. But I can interrogate the balance between the syntax and the et cetera, and that's what we'll be doing today.

This is not a new point. Ever since the beginning of generative grammar, people have been reiterating this point: if a sentence is acceptable, that means there's nothing much unacceptable about it, but there are many different ways in which a sentence can be unacceptable. So here's a sample. We have some famous sentences at the top, which are completely grammatical. Certainly 1a is completely grammatical. 1b, you can make your own mind up. This is something called a comparative illusion; it gets worse the more you read it. You're welcome. These are sentences where the problem is probably nothing to do with the grammar, but something about assigning a meaning, or some kind of stable meaning, to them. Sentence 2a is probably ungrammatical for syntactic reasons; sentence 2b probably also. It's grammatical in a sense where you're denying something furiously, but it's not grammatical in a sense where you talk about how furiously you slept. These you maybe want to say are really syntactically ill-formed in some sense.

Sentence 3, which some of my poor undergraduates have heard me talk about far too much already this week: this is a perfectly grammatical sentence of English, put together by Jim Rogers and Geoff Pullum. If you don't believe me, this is how it works. You've got some people. You've got some people next to the people. But these people left these people. So these people left, and these are the people people left. You've got some more people over here, next to the people people left, and these people left them. So people people left left people, and these are the people people left left. And then they left, because they were all on their own. These are the people people people left left left. It's fine. It's not the most useful sentence in the world, but it's not ill-formed. We just can't work out what it means in real time. So, is that a matter of syntax or not? Well, probably not. This might be a matter of sentence processing, or something adjacent to syntax in that respect, but not really syntax.

'Oh, it was Sam who ate the beans.' There's nothing wrong with that, but it's not a good answer to a question like 'Tell me about Sam — what did Sam eat?' It's a good answer to a question like 'Tell me about the beans — who ate the beans?'
So this is also grammatical; this is also unacceptable, for a reason which isn't quite syntax. The point here is that there are many different ways to be unacceptable. Some of us spend our lives exploring this acceptability. And it's up to the analyst to work out where to draw the lines: how much of this is to be explained by which part of the vast panoply of linguistic theory.

When I started as a linguist, which is 28 years ago now — this is my suit from when I went to Oxford; you have to have a suit to go to Oxford, because it's a bastion of privilege and so on. And so Gran Jan, who's in the front row, bought me a suit, and I thought it deserved an outing today, so I wasn't going to dress up, but here we are. So I started in Oxford at a time when there was what you might call a syntactocentric trend. This is a trend to land-grab for syntax and explain as many things as possible in terms of syntax. I found a photo of my syntax textbook from my time in Oxford, and here it is. That's me, age 19, trying to take it all in. They had to put a perspex screen between me and the book, because otherwise I would skip to the end and miss bits. So it was very big, and every chapter was a bit more technical and a bit more niche than the one before. And at some point, I just lost the will to live. I think it was Chapter 9. This was one of the things we talked about in Chapter 9: a theory of something called gamma marking. We don't need a theory of that. It doesn't exist, but there was one. And I just remember refusing. I just got zero on an assignment, because why would you spend your life doing that? So I was being juvenile and immature and all of those things, but what I was really doing deep down, if I could interrogate myself now, all these years later, is I was refusing to believe that this was a useful part of a theory of syntax. It's too big, it's too complicated, it's too niche. There's no way that this could be something that I could have learned as an infant — there's no way, because just look at it: I can't learn all that as a 2-year-old. There's no way that I could have been born with this, because this is not part of some general cognitive endowment; this is something which is incredibly specific to parts of syntax. I had an ill-formed, nascent belief that the theory of syntax must be simple, because if it gets complicated, then it becomes implausible. And so in searching for a simple theory of syntax, I'm also searching for one which seems plausible as a part of a general theory of linguistic cognition. And that's the game I've slowly learned to play since I left the Big Book behind.

So that's what I'm going to be talking about today. There are going to be three sections to this, and we're not going to leave today with a simple theory of syntax. I'm really sorry — I mean, I tried to make sure that the talk title didn't over-promise.
It's not going to happen. This is not one of those things where the edifice comes crashing down. Rather, you just chip away at it bit by bit and hope that you can slowly pursue this kind of reductionist programme. The first two sections are going to be about that today: one of them looking at one of the things mentioned in the introduction, which is WH movement — that will be the first section — and one of them looking at this knotty question about how Indo-European is strange. And then I'm going to indulge myself and get a bit more programmatic at the end, and say where I think this might be going when all the patient chipping away at the edifice has been done. First, water from a very unsmart bottle. Right, onwards.

So, this first section — and I should say, I'm becoming more and more collaborative as I get older. In an ideal world, I would collaborate with people who are better than me at finishing things, but that doesn't seem to be the way it always goes. This first section has a lot of input from a postdoc in Göttingen called Kenyon Branan. And there's also a part in the middle which relies heavily on some work I've been doing with Caroline Heycock and Elise Newman.

This is about WH questions in English. There's a recipe for how to make a WH question in English. You start with a sentence like 'I devoured the Twinkies'. This is unacceptable, but not for reasons of grammar — this is unacceptable for reasons of taste. You say, 'You devoured what?' You've taken the Twinkies and you've turned it into a WH word. That's already a question at this point. The more common thing to do is to then take that question word, the 'what', and move it to the front of the sentence, and you get something like 'What did you devour?' That's WH movement: taking that WH word, or WH phrase, and putting it at the front of a sentence. You can cover a very large distance with one operation of WH movement, so you could say something like 'What did you say that you never thought that you would see me devour?', and it's still fine. It takes a bit longer; it's fine.

But at the same time, there are fairly simple examples of WH movement which are just not possible — I mean unacceptable, probably ungrammatical. So I can say 'I devoured the Twinkies with the sashimi', or 'I devoured the Twinkies and the sashimi'. These are both as grammatical as each other. These are both as repulsive as each other — but again, that's not my field. But then if you try to question the sashimi: you can say 'What did you devour the Twinkies with?', and everything's fine. But if you say 'What did you devour the Twinkies and?', something has just broken. And so this is a fairly simple case where you've still got a fairly short sentence, but this is not a well-formed sentence of English anymore. Haj Ross, in his dissertation in 1967, called this the coordinate structure constraint, and it basically says: don't do that WH movement out of coordinate structures. And here, 'the Twinkies and the sashimi' is a coordinate structure.
'The Twinkies with the sashimi' is not, and that one's fine. You've got to say something about those. I've tried; that's not what I'm talking about today. I'm talking about some less clear-cut cases, where it's harder to know exactly what the fact of the matter is, let alone how to integrate your analysis of that fact of the matter into a broader theory of grammar.

So this is the kind of thing I was working on for my PhD, and this is in particular WH movement out of adjuncts. Adjuncts are the optional constituents in a sentence. You can say 'I felt unwell', and that's a full sentence of English. But I could also say 'I felt unwell after I ate the Twinkies', and that's still a full sentence of English. I've added this extra clause; it didn't have to be there, it's optional. Those are the adjuncts. And when I was growing up as a linguist, the received wisdom about moving out of an adjunct — moving a WH phrase out of an adjunct — was that it's impossible. I've got a condition from Huang, in number nine on the slides here, which is a kind of technical way of making it impossible. It doesn't really matter how it works. This is what a syntactician would say about these: a way of deriving, in syntactic terms, the fact that sometimes, if you try to move out of an adjunct, it's just not possible. And by 'not possible', I don't mean that something breaks. I mean people don't do it; it doesn't sound good. The police don't come round.

The problem here is that this generalisation is just not robust. And so there was a gradual recognition that it's not that movement out of adjuncts is actually impossible; it's more — let's call it fragile. So what we've got in 10 is a bunch of different cases of moving different things out of adjuncts, and some of them sound better than others. 'What did you feel unwell after you ate?' Probably marginal, not terrible. 'How many Twinkies did you feel unwell after you ate?' Probably slightly worse. And the only difference here is: have I just left this question as a bare noun phrase, where 'the Twinkies' could be an answer, or have I questioned a precise number, like seven? And somehow that's made a difference. Or you could say, 'How enthusiastically did you feel unwell after you ate the Twinkies?' And this is, again, fine if you're talking about feeling unwell enthusiastically. But if you're talking about eating Twinkies enthusiastically, this is not good. So it seems like what you move makes a difference.

But also, these were the sentences that really were preoccupying me for most of my PhD — just these three. I think back now and it's like, why did anyone let me do that? They did. Here, we've got the same adjunct in each case, we've got the same word, 'what', in each case. We're moving across different things, and that means it must be this part in the middle that's making the difference. 'What did John drive Mary crazy whistling?' Not too bad. 'What did John arrive whistling?' Not too bad.
'What does John work whistling?' Significantly worse. And I couldn't find a way to make sense of this in syntactic terms, so I stopped believing that this was syntactic, and I stopped believing that the question of when you can move out of adjuncts is, in the general case, syntactic. I stopped believing that things like this should be part of our theory of grammar. But these patterns still have to go somewhere. Particularly with that second pattern, I started to believe that there was a semantic element conditioning when this movement was possible and when it's not.

So I started looking at models of event structure, and I started using this very simple model. It just has two parts to event structure. Some events are processes — that would be like running, for instance; it just goes on, it's just a process, it has no intrinsic endpoint. Some events are culminations. They don't have any particular process associated with them; they just happen instantaneously. You notice the commotion, the explosion happens, whatever it may be. And some are both: running a half marathon is a process plus a culmination. You keep going and you cross the finish line. So that's a complete, maximally complex event: a process leading to a culmination.

And in the cases where the movement out of the adjunct was acceptable, I noticed that you could smush together the description of what was happening in the adjunct and the description outside the adjunct to make a single event description. 'What did John drive Mary crazy whistling?' There's some whistling and then Mary's crazy. 'What did John arrive whistling?' There's some whistling and then he arrives — a process and a culmination in both cases. But 'What does John work whistling?' — this is not a process and a culmination. This is two processes going on in parallel, next to each other. The two which were okay look like they could be smushed into a description of a single event; the one which wasn't okay didn't look like that could happen. And so I suggested that this was being conditioned by what I called a Single Event Condition: you can do the WH movement if you can form a semantic representation where the adjunct and the host form a single event description. And that was my PhD: explaining three sentences in 250-odd pages. And when I say 'explaining' — I had no idea why. So this is a very limited form of explanation, because I left more puzzled than I went in. And that was me; imagine how other people felt.

So fast forward 18 years — my God. Well, actually, let's fast forward about 15 years, to when Kenyon Branan turns up on the scene and says intelligent things and solves my problem. Kenyon has been encouraging me to revisit the question of why a condition like that might hold, from a completely different perspective. This is what he does; he's kind of remarkable at it. He encouraged me to look at a phenomenon called non-canonical switch reference.
Switch reference is a class of morphemes you have in many languages. They occur at the edge of a clause, and they tell you that the subject of this clause is the same as the subject of a previous clause, or else that it's different from the subject of the previous clause. We're looking at those morphemes — specifically, we're looking at non-canonical uses of those morphemes. That's what they're called in the literature: you get the same 'same' or 'different' marking, but it's not regulating relations between subjects. It's regulating something about situations instead.

So in 13 — this is from Kiowa, from Andrew McKenzie's PhD — we have 'Katherine wrote a letter'. This is in the context of a letter-writing campaign: everyone's writing to their senator because we're upset about something. Katherine wrote a letter, and — same subject — Esther also wrote a letter. Now, Esther is not Katherine. These are not the same subjects, but the same-subject morpheme is still being licensed. And McKenzie's argument is that this is licensed because this letter-writing situation and this letter-writing situation are part of a larger situation, the letter-writing campaign.

Here's an example from Lakota, slightly more involved this time. Two young men were friends and — same subject — they loved each other very much. So far, so unsurprising; this is all fine. And those two set off to war — different subject. But it's the same people. This is not about same subject or different subject; the reference of the subject has stayed the same throughout. What's going on here is something like a paragraph break. The same-subject marker here is telling us: I'm still talking about the same idea, I'm still elaborating on these two being friends. The different-subject morpheme here is telling you: next paragraph, those two set off to war.

So Kenyon's point in bringing this to my attention was, firstly, that this is grammatical — these are actual grammatical morphemes. Secondly, they're regulating relations between something a bit like events; 'situations' is the usual word which has been used to talk about these. I'm not going to get into the difference between events and situations today. Thirdly, there's no default. There's no marked–unmarked relationship here. There's one morpheme for same subject, and there's one morpheme for different subject. It's not that one of them is the default and the other one is the marked case; they're both grammatically of equal status.

Kenyon's idea was: what if what was going on in my little adjunct cases was really 'What did John drive Mary crazy — same situation — whistling?', 'What did John arrive — same situation — whistling?', 'What does John work — different situation — whistling?', marked in some way similar to the switch reference markers? Now, I've come to believe — because I'm very easily persuaded by such things — that the same thing happens in English: that English has the same kind of same-situation/different-situation ambiguity, if you like, that you get in Kiowa and Lakota. It's just that we're not smart enough to pronounce the difference. It's all just there in some null sense.
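Schematically — and this is just my own rough paraphrase of the idea as presented here, not McKenzie's actual formalism — the non-canonical readings can be thought of as conditions on how the two described situations relate to a larger topic situation:

```latex
% s_1, s_2 are the situations described by the two clauses; \sqsubseteq is 'part of'
\text{non-canonical SS licensed: } \exists s\,(s_1 \sqsubseteq s \wedge s_2 \sqsubseteq s)
  \quad \text{(both clauses describe parts of one larger situation)}
\text{non-canonical DS licensed: } \neg\exists s\,(s_1 \sqsubseteq s \wedge s_2 \sqsubseteq s)
  \quad \text{(roughly, the clauses belong to distinct larger situations -- the 'paragraph break')}
```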
So to make this argument, I'm going to start with some examples from Moens and Steedman — it's a very famous range of examples now. This is about 'when'. 'When' is primarily a kind of temporal word: if I say 'When did you arrive?', you say '5:00'; it's asking you about time. But if you look at the examples in 15, it's clear that what 'when' is doing in this adverbial use is nothing about time. 'When they built the 39th Street Bridge, a local architect drew up the plans' — well, the plans come before the building. So that's not coincident; that's before. 'When they built the 39th Street Bridge, they used the best materials' — that's coincident; the building happens with the materials. 'When they built the 39th Street Bridge, they solved most of their traffic problems' — the solving follows the building. So there's no temporal constraint being imposed by 'when' on the relationship between the building and these other things; it could be anything. The claim is instead that these are part of some larger description of an event or a situation or something like that. That all seems unimpeachable to us.

But the important point here is that when you add a word like 'approximately' or 'exactly', that disappears, and suddenly all you get is something strictly temporal. And in fact, you can't get an interpretation where the two are part of the same situation. 'Approximately when they built the 39th Street Bridge, a local architect drew up the plans' — that has to be drew up the plans for something else. 'Exactly when they built the 39th Street Bridge, they used the best materials' — not for the bridge, for something else. It's crazy to say this if you're talking about using the best materials for the bridge. 'Approximately when they built the 39th Street Bridge, they solved most of their traffic problems' — it has a kind of coincidental feeling to it; it's not really because they built the bridge that the problems are being solved.

So we have 'when' being used to describe a single situation here — this is the same-situation use, an elaboration on how the building happened in some sense — and we have a different-situation reading here, where the building of the bridge is unrelated to these other things. It could, in principle, be related; it's just that we don't let it be related. The grammar doesn't let it be related. That's the same ambiguity that was being marked grammatically in Kiowa and Lakota. Here it's not being marked grammatically, but it's still there.

So now we can go from there and loop back towards WH movement. I've helpfully done this without WH phrases, but look at 17. We're now moving a topic — pretend it's a WH phrase, same idea. 'Snakes like this, you need to be careful when you touch.' Not too bad. 'Snakes like this, you need to be careful precisely when you touch.' Things have gone a little bit wrong there.
Certainly worse than you'd expect, and all I've done is put an adverb in. That's the kind of thing where, from a syntactic perspective, if there is a distinction there — if it is worse when you add 'precisely' — it's hard to see why that would be. But if we go back to the ideas from the previous slide, we have 'be careful when you touch the snake': one situation. 'Be careful precisely when you touch the snake': two situations. So we've got a distinction in how this is interpreted — one situation versus two situations — the same kind of idea as the Single Event Condition.

Another example of the same kind of thing comes from Landau's work on Hebrew: a different type of movement operation, a similar type of effect. If we have 'Gil slept during the lecture', or 'Gil slept during Rina's lecture', this has two possible meanings. One of them is that Gil is the lazy student: he was at home asleep, and so he missed the lecture. Gil slept during Rina's lecture, and that's why he wasn't there. The other possible reading is the boring-lecturer reading: Gil was in Rina's lecture, and Rina was talking about — what do people talk about that's really boring? They talk about that, whatever it was. I don't know. Maybe it wasn't anything; maybe it was just for a very long time. That was sending Gil to sleep. Gil slept during the lecture because he was in the lecture and it sent him to sleep. So there are two readings of 'Gil slept during Rina's lecture'. One of them is describing two situations: Gil sleeping over here, Rina lecturing over here. One of them is describing a single situation, where Gil is sleeping in the lecture.

Now, the interesting thing is that Hebrew also allows this operation of possessor extraction, where you don't say 'during Rina's lecture', but you say something more like 'Gil slept to Rina during the lecture'. Same meaning, different syntax. And in this case, suddenly it's disambiguated. You can say 'Gil slept to Rina during the lecture', but you can't continue with 'and that's why he didn't come'. So you can't have the two-situation reading, where Gil is missing the lecture because he's sleeping at home. You can only have the one-situation reading, where Gil is put to sleep in the lecture. So again, we have the same kind of pattern in a different type of movement.

And so this is the first thing that Kenyon has helped me with here: he's made me see how to generalise this beyond the cases I was looking at. And also, he's shown me how to link this to an established grammatical phenomenon in the world's languages — switch reference — rather than it just being a condition in its own right. But where I get excited about where this is going is that I think we can also start to make sense of this condition now. The sort of why-question that was puzzling me at the end of my PhD starts to have an answer. And this is the answer that we're pushing towards. These WH phrases which have been moved are related to two positions in the sentence. If you say 'Which book did you read?', there's a position at the front of the sentence, and there's a position after 'read'.
This is going to be translated into some logical form like 'for which book x, you read x'. We can finesse the details of this if you want; this is roughly how Danny Fox has it — there's more Greek in Danny Fox. We also have reasons, mainly from Paul Elbourne, to think that in any of these things — 'for which book x, you read x' and so on — the identity of our x is going to be determined relative to a situation.

So now imagine that you have 'for which book x in the situation at hand, you read x in that situation at hand'. That's fine. And if you had 'for which book x in the situation at hand' — let's call it situation one — 'you laughed when — same situation — you read x in that situation', that's still fine. But 'for which book x in this first situation, you laughed approximately when — different situation — you read x in that different situation', which may or may not be the same as x from the first situation, because it's a different situation and you can decide what goes in those situations and so on — suddenly, you're asking something which seems like an incoherent question. I'm talking about a book in a situation, I'm switching to another situation with no determinate link to that first situation, and I'm asking you about a thing in that second situation, and I don't know how to do that. That's certainly not what a canonical question is trying to do.

So, skipping over many details — please don't make me go through the details — what I think we can get out of this is that we now have a way to make sense of where I was with my PhD, because we can see what would go on if you were to have a different-situation reading: you would end up with an incoherent reading of a question. You would be trying to interpret the WH phrase as a stable object with respect to two different situations, and we don't know how to do that.

So what we've done here is we've moved away from the syntactic explanation. By the time of the Single Event Condition, by the time of my PhD, we were hinting at a semantic alternative, but we didn't really know how this could come about. But now, by firstly making this link to situations and secondly making this link between the semantics of situations and the semantics of movement, we can start to propose an account of why extraction from adjuncts is sometimes okay and sometimes not okay, just in terms of the interpretation of these things. So no specific syntactic conditions are required. And I'm not going to try to say that any of this is simple — that's not the point. I'm not trying to simplify the analysis of language. I'm taking things out of the syntax and distributing them in a place where they fit better. And if we can do that, then hopefully we can end up with a simpler analysis: several moving parts interacting in a way which produces more empirically satisfying results.

Sean's still not looking at his phone. You've been very, very brave. I told Sean not to sit at the front because, you know, if you sit at the back, you could get away with that nonsense, but I'd see you.
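To put the shape of that argument in one place, here is a rough schematic of the two readings — my own informal rendering of the Fox-style logical form with Elbourne-style situation-relative individuals, not the actual notation of the proposal:

```latex
% Same-situation reading (coherent):
\text{for which book } x \text{ in } s_1 :\ \text{you laughed in } s_1 \text{ when you read } x \text{ in } s_1
% Different-situation reading (incoherent as a question):
\text{for which book } x \text{ in } s_1 :\ \text{you laughed in } s_1 \text{ approximately when you read } x \text{ in } s_2, \quad s_2 \neq s_1
% The second asks about an individual identified in s_1 while locating the reading event
% in an unrelated s_2, so the WH phrase has no stable value across the two situations.
```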
Okay, so we're on to the second case study of the kind of thing we can do here. This is about the grammar of WH words. Most of this is joint work with Nick Gisborne, over many years now, and eventually we'll finish it, right? We might write something. We've done that once or twice; we should do it much more often.

Okay, so we're talking about the WH words. We've just met them for the first time: the 'what's and so on. In English, they have this formal similarity — they all start with 'wh': who, what, when, that kind of thing. That's an accident of English. In French, you've got things like 'qui', but also 'où' and 'comment'. In Japanese, you've got 'dare', 'nani', 'doko', and so on. So we're going to call all of these WH words. We're going to ignore the fact that they don't look like WH in other languages; it's just that words which do this job are WH words, regardless of how they're pronounced.

Now, most of these words have other uses as well, and probably one of the most common ones is to be used as an indefinite. So in German, we have 'who comes there' and 'there comes someone'; 'who' is the same word in both cases, but in the first one it's forming a question, and in the second one it's being used as some kind of indefinite. So that's one thing you can do with a WH word. Another thing you can do with a WH word is use it in a relative clause. I've chosen an example from Johan Kreif here, because he has been an innovator in the field of Dutch WH relatives. This is not part of the standard language; this is just a thing that he does. It shows you that this is a thing that people can creatively start to do; it's not just a thing that they've been handed down from their ancestors. So 'the mistake who they actually make': this is something that he said. This is not grammatical in standard Dutch, but he will keep saying things like this, meaning 'the mistake which they actually make', and he's using the WH word to do it. This happens in English, happens in French, happens in Johan Kreif's Dutch — and it happens in very few non-Indo-European languages.

So there are two kinds of challenges for a syntactician here. The first one is, how do you make sense of this fluidity in what WH words do? It turns out that over the past 20 years or so, the understanding of the links between indefinites and interrogatives has really come a long way. But extending that work to relative clauses doesn't happen naturally for most of these theories. The second challenge is, how do you make sense of a typology which says: this is common, but only if you're in a particular language family, and otherwise it's really rare? Syntacticians have to confront statements like that, but we're not really well equipped to confront statements like that. So that's what we've been trying to make sense of for a long time. Too long — there's a slide in a minute which came from my job talk here. I promised I'd solve it. I will.
Okay, so just to start sharpening the question a little bit: we've also seen that these WH words are associated with two positions. There's a kind of canonical position, like 'you ate what?' — that position. And there's also this position at the front of a clause, like 'what did you eat?' And it turns out that these two positions are associated with these different functions in different ways. If you see a WH word in a relative clause, it's always at the front; it's never in situ. This is one of Bruce Downing's universals. If you see a WH word being used as an indefinite, then by default it's in situ; it's not fronted. It can be fronted if it's topicalized or focused or something like that, but it won't by default come up front. By default, it will come in the canonical position.

So you might now start to think, well, there are two types of WH word. I'm going to give these names — if you have some semantics, you'll probably see where the names come from; if you don't, they're just names. You might call one type operators, and you might say that these are always fronted, and these are good for making relative clauses and questions — they can do these two things here on this little semantic map from Luhan. You might call the other type dependents, and you might say these are usually in situ, and they're good at making indefinites and questions, but they're not good at making relatives. And so then we could rephrase the question. We could say: most WH words in most languages are dependents. That means they can be questions or indefinites. In many Indo-European languages, it seems like our WH words have become operators; that means they're good at being questions or relatives. And in other languages, that generally doesn't happen. So then the question is: why? And the answer is parallel evolution.

And this is parallel evolution — I love this slide, and Sarah made this picture. This is a sabre-tooth tiger. You've probably heard of them: it's a big cat, and it's got really big teeth. This is something that you may not have heard of: this is a sabre-tooth marsupial. These also exist — well, existed; I hope they don't exist. Not placental mammals; different mammals. Really big teeth. There are no sabre-tooth birds. I don't think there are sabre-tooth fish or insects or anything like that. I feel like that's a bit of a hostage to fortune — but define 'tooth'; I'm a linguist, I won't. This means that there's some kind of convergent evolution going on: it's useful to have really big teeth. But this convergent evolution is only happening within one particular family or phylum or whatever mammals are. You have to be within a certain genetic grouping to be able to make this adaptation in the first place. But within the mammal family, independently, subgroups just keep going sabre-tooth; they keep developing these really big teeth. And that's the same thing which is happening in language. It's just that language isn't as cool as this. This is amazing. Why don't we all study sabre-tooth koalas?
How it would look for parallel evolution to work in a linguistic case would be to say that you have a bunch of genetically related languages descended from a common ancestor, and they have cognate forms — the same descended form across the languages — and those cognate forms can repeatedly develop similar new functions. And that doesn't happen so much in genetically unrelated languages. Well, what I've just described with the WH relatives in Indo-European — that's parallel evolution. If we want to explain the distribution of WH relatives, we need to look for a way to make sense of parallel evolution in linguistic terms.

So I'm going to walk briefly through where we've got to with this. This is a poor diagram, but it's a starting point. This is what we think you had in the early Indo-European days. You had words which could be used as interrogatives or as indefinites. These are the dependents I was just describing. In a special case, you can use these words as indefinites in conditional sentences. Let me show you what this looks like in Hittite. This is extremely early Indo-European; I cannot begin to explain why it's written like this. There are a couple of things to notice here. The first one is that the WH words can occur in different positions. Here we have a WH word at the left edge; here we have one slightly in from the left edge, in a slightly lower position. There are about four different positions that have been identified for WH words in Hittite, and they have different interpretations. So you put it in different places depending on what you want it to mean. That's the first thing to bear in mind about Hittite.

And the second thing is that it's very common to have them in what are called asyndetic conditionals. These are things interpreted like 'if' statements, but they don't have an 'if'; they're just two clauses paratactically shoved up against each other. So this is 'in the future, who after me becomes king', meaning 'if in the future anyone becomes king after me', where 'who' is being interpreted like 'anyone' — like an indefinite. But you could also gloss this as 'whoever becomes king after me in the future', blah, blah, blah. It's either going to be glossed as something like an indefinite in a conditional, or as something like the left half of what's called a correlative. I've put this on the board for Hittite; this idea was first noted by Avery Andrews in his PhD for Vedic Sanskrit, and it has resurfaced in many different places since.

That means that, from our starting point, this little bit on the left has this kind of latent ambiguity. If you have an indefinite in a conditional, then there's always the potential of reanalysing this as a correlative. So rather than seeing this as 'if anyone, blah, blah, blah', you see it as 'whoever, blah, blah, blah'. You get the same truth conditions, the same word orders; nothing much changes here. And from there, all hell breaks loose. You've got this thing being used as something which looks like a relative clause, and you can do whatever you want with it.
But you always seem to end up with different types of relative clause — this is the point. So what I have here is a kind of concatenation of an idea from Belyaev and Haug, that these 'whoever'-type conditionals can be reanalysed, or grammaticalized, into definite correlatives — so 'the person who, blah, blah, blah' — and then Haudry takes us from there into nonrestrictive relative clauses and into restrictive relative clauses. This is what they put on the board for Latin. I'm not going to start going through the evidence for all of this; these are published things and can be read. But there are established pathways of evidence to get you from this starting point — from generalising conditionals, generalising correlatives — into other types of relative clauses.

But what Nick and I have been noticing is that different Indo-European languages, once they get to this point, go in all sorts of different directions. So there's no single grammaticalization pathway here or anything like that. There's a space where languages bounce around freely, but they never get out of that space. There'll be different diachronies in different languages, different sets of changes in different languages, but somehow you always end up with different types of relative clauses. So we've started talking about this as a locked room. This is the way into the locked room: from here, you get in, and then you just bounce around inside — a padded cell might be better.

This is a diagram slightly enriched from the last one, because we've now got a big red wavy line. That red wavy line is the pattern of what happened in English, which didn't go through this Belyaev–Haug–Haudry pathway to get from generalising correlatives to other types of relative clauses. It did other stuff. And I'm going to just briefly show what some of this other stuff is by looking, first of all, at Old English and then at Middle English. All I'm trying to get at here is that the histories of these different Indo-European languages are not the same; they're different from one to another. What they have in common is that from this starting point, you spread out to fill different parts of the typology of possible relative clauses.

So, Old English: in questions, you would put the WH word at the front of the clause. It looks quite modern in that respect. But there are several hints that the WH words are still dependents in Old English. The biggest hint is that you still get WH indefinites — you still get 'who' interpreted as 'someone' or 'anyone'. But even in cases where you might think this is starting to behave like a relative clause, you still get some hints that this is an indefinite within a relative clause. So if you look at the bare WH free relatives in Old English — these are the only types of WH relatives you'll find in Old English, the free ones — they're all in the kinds of environments that Caponigro has identified as licensing only indefinite free relatives.
So 'because they didn't have anything to pay you' — 'anything that they paid you'. What we have here is a free relative syntactically, 'what they paid you', but the interpretation is still an indefinite one: it's still 'anything they paid you'. So the syntax is moving towards the modern syntax, but the semantics is still this earlier indefinite semantics.

Move on a few hundred years, and the indefinites have disappeared. Unambiguous, clearly headed relatives have appeared. There are clear signs that these WH words are now operators, in the terms from before, not dependents. This is an early WH relative: 'let us no longer see this pain in which we have long been' — 'in which' is the WH relative clause. So we've gone from this starting point, which we could recognise in Old English as well, and we've got to the same endpoint, which we can recognise in English and Latin, but we haven't gone through this pathway. We've gone through a different pathway, which I haven't made into boxes because life is too short. And so there are different diachronies to get us from the same starting point to the same endpoint; there are different pathways converging here.

How can this happen? How can we reanalyse the WH word in such a radical way? Well, once you've said that Old English fronted its WH words for other reasons — not because they have to go to the front, but just because they happen to in questions — it turns out that a lot of the time, it's completely harmless to choose either of these analyses. You can patch up the rest of your analysis to get the right interpretation compositionally. There was a small amount of evidence in Old English for a dependent analysis — the WH indefinites, and other words like them — which disappeared. There were a few new words in Middle English which unambiguously behave like operators and appeared in relative clauses from the beginning. Slowly, the balance shifted, and the new analysis of WH words came in.

What this tells us is that there's no one pathway for the emergence of WH relatives. Every language we've looked at is slightly different. But early Indo-European is a fertile breeding ground — a fertile evolution ground — for this parallel evolution, because it creates an environment in which this kind of reanalysis is natural. There's flexibility in the position of WH words, and there's a correlation of position with interpretation, so there are reasons to find them in different positions in different cases. And there are these semantic structures where a lot of the elements of the semantic structure are null — just not pronounced. What that means is that you don't know exactly which job is being done by the overt morpheme. You know it's doing something, but working out which bits are being done by the overt parts and which bits are being done by the null parts — that's where reanalysis is very likely to happen. You don't know what the role of the overt morphemes is, because you know that there are too many jobs for the morphemes you can hear; there must be some null stuff floating around.
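As a toy illustration of the kind of reanalysis pressure just described — a minimal sketch of my own, not anyone's actual model of this change; the decision rule and the frequencies are invented, though the 'dependent' and 'operator' categories are the ones from the talk — here is how a learner faced with mostly ambiguous evidence can tip from one analysis of a WH form to the other once the discriminating uses shift:

```python
import random

# Two hypothetical analyses of a WH form, each licensing a set of functions.
ANALYSES = {
    "dependent": {"interrogative", "indefinite"},  # question or indefinite uses
    "operator":  {"interrogative", "relative"},    # question or relative-clause uses
}

def learn(observed_uses):
    """Pick whichever analysis accounts for more of the observed uses.
    Interrogative uses are compatible with both analyses, so only the
    indefinite and relative uses actually discriminate between them."""
    scores = {name: sum(use in functions for use in observed_uses)
              for name, functions in ANALYSES.items()}
    return max(scores, key=scores.get)

rng = random.Random(0)

# Old-English-like input: mostly questions, plus some WH indefinites.
old_english_like = ["interrogative"] * 80 + ["indefinite"] * 20
rng.shuffle(old_english_like)

# Later input: the indefinite uses have disappeared and a few
# relative-clause uses have crept in.
middle_english_like = ["interrogative"] * 80 + ["relative"] * 5
rng.shuffle(middle_english_like)

print(learn(old_english_like))     # dependent: the indefinites keep the old analysis alive
print(learn(middle_english_like))  # operator: with the indefinites gone, the balance tips
```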
Now, this is a very interesting place to end up in if you're interested in theories of learning and change. And I've been starting to explore this — by 'starting', I mean since 2016 — with Simon Kirby and Richard Blythe, and more recently with Dan Lassiter and Juan Guerrero Montero in Physics. Because it turns out it is now a description of quite an interesting dynamical system. The classic way of thinking about grammar change is that you have a function that you're trying to realise, and you have a set of forms competing to realise that function. Do you move the verb to I, or do you insert 'do' in I? You've got two forms competing. But what we've just been talking about in the history of WH words is not a series of forms competing to do a job; it's a series of functions competing to be the specification of what you can do with that form. So it's the other dimension: the form is stable and the functions are competing, as opposed to the Kroch dimension, where the function is stable and the forms are competing. And in the general case, learners are trying to do both of these at once. They're trying to associate some set of forms with some set of functions, and we don't know which go with which, and we can't make any prejudgments about that. That's a harder task than classical discussions of grammar change allow for. And if it's a harder task, then there's more potential for errors by learners. And if there's more potential for errors, there's more potential for interesting theories of change to emerge. And that's the kind of dynamics that Richard and Simon and Dan and Juan and I puzzle over and fail to understand, and one day we'll understand that as well.

What does all of this tell us about syntax, which is what I'm meant to be talking about? Nothing. There are no implications of any of this for the theory of grammar. This is all just stuff we can do with a fairly simple theory of grammar. There is no message here. It turns out that none of this is the syntactician's trouble at all; we can just get on with other things. So there we go. That's another type of syntactic reductionist work: we can just stop caring about these things. So that's a tentative plea for simplification of the syntactic analysis of WH movement. And it's taken far too long — 45 minutes — and it's just a drop in the ocean. That's just the starting point.

So, because I am on a pedestal today, I'm going to just say all the other things I think we could get away with doing. I can't see why we have transformationally derived LFs. If you don't know what they are, lucky you. If you don't have them, you have no need for a copy theory, so that can go. Phases: they were invented 24 years ago now — 26 years ago now, 27, it's going up by the second — and still no one can tell me what they are. Syntactic specification of functional sequences? No. Where is this going to be specified? If it's in the lexicon, then why isn't it different for different languages, the way lexicons are? If it's not in the lexicon, where is it?
While I'm at it: most of locality theory isn't syntactic. Relativized minimality can get a free pass, that's quite nice. Really going out on a limb, I'm not sure there's a real need for syntactic selection; I think most selection can be done in the semantics. If we started going through things like this, we'd be getting towards a really minimal theory of syntax. That's what minimalists are meant to be doing, and I am a minimalist, although I keep that quiet as much as I can.

But if we keep going down this road, we're going to end up with this as our theory of syntax: there's just nothing there. And I don't believe that. I do want to end up with a simple theory of syntax. There's this lovely quote that I keep mentioning from Dan Finer, that the goal of syntactic theory is to destroy itself from within. I'm not interested in that. I want to reduce it, but I would like something to be left at the end. So I'm going to take the last few minutes we have to look at what might be left in the theory of syntax.

I'm going to start by looking at some work which is now really quite old, but which has been finished and published, on the grammatical comparison of a bonobo and a human infant. What we're doing here is trying to isolate things that the human infant could do that the bonobo could not do, and we can start to think about this as a kind of window into species-specific aspects of syntactic cognition.

This is work based on what I call the Kanzi corpus. It came from Sue Savage-Rumbaugh and colleagues; they didn't call it that, they were too modest for that, I guess. It's 660 English sentences spoken to a bonobo, all of them instructions. You watch what the bonobo does and you write it down. The same 660 sentences, more or less, were spoken to a human infant, Alia, and what Alia does was written down too.

So, for instance: 'Kanzi, take the tomato to the colony room', and Kanzi makes a sound like 'orange'. He then takes both the tomato and the orange to the colony room. This is scored as correct, because it's assumed he wants to eat an orange. 'Take the tomato to Karen's room', and she does so. 'Put the monster mask on your head': Kanzi drops the orange he is eating into the monster mask and puts the mask on his head. This is scored as correct, as it is assumed that Kanzi wants to continue eating the orange while he has the mask on, not that he misunderstands the request. 'Put the mask on your head', and Alia does. Lots of this kind of thing. I would love to play you YouTube videos of this kind of thing being done; they are heartwarming and baffling, but I signed a disclaimer saying that I would own all the media I used in this, and I don't own YouTube. So look them up. They will amuse you and then terrify you.

Across the 660 trials, Kanzi responded correctly 71.5% of the time, and Alia, the human, responded correctly 66.6% of the time. So on a baseline, first-pass figure, the human did worse than the bonobo at understanding human language.
And I wouldn't read very much into that, because the main takeaway I get from reading the descriptions of Alia's behaviour is that she is bored. Kanzi is quite motivated by oranges.

But how does Kanzi understand what he's doing? For the majority of the trials, a hypothetical agent could understand what's going on just by knowing what the words mean, stirring them together in some non-crazy way, and interpreting the result. So: no grammar at all, just word meanings smushed together any old way. If you add in some basic notion of plausibility, you'll get the right result most of the time. That's informative about Kanzi's vocabulary, which is really impressive, but it doesn't tell you about grammar.

You can go a step beyond this and look at reversible ditransitive pairs: do you put X in Y, or do you put Y in X? You need some sensitivity to linear order to get this right, and Kanzi does fine on these cases. If you say 'put the tomato in the oil' (this is tomato in the sense of tomato juice) or 'put some oil in the tomato', he will do those things fine. There's no impairment to performance from having to pay attention to linear order.

But there's this place in the corpus where his performance dips. This was noticed by Savage-Rumbaugh and colleagues; I'm not the first to see it. Their take was to try to explain it away and say it wasn't really there, but I think, as far as the evidence in the corpus can show us, it really is there, and it is noun phrase coordination. In all of these other cases, you can think of the arguments of the verb as a single noun. You can think of 'put the tomato in the oil' as just put, tomato, oil: a three-place relationship, someone putting something in something, where those last two somethings are just one noun each. But if you ask Kanzi to fetch the ball and the rock, then the thing which is being fetched is not the ball and it's not the rock; it's the ball and the rock, the whole unit, this complex phrase. And at that point you need a notion of constituency. You don't need to build a big noun phrase or anything like that, but you do at least need to say that there's some kind of thing which is 'ball', some kind of thing which is 'rock', and those two things together make a unit of some sort.

Kanzi's performance on these is significantly worse than his performance on the rest of the corpus. There are only 18 trials, so there are not very many like this, but if you look at them, you find that half the time he ignores the first noun. So, 'give the water and the doggie to Rose': he just gives the doggie to Rose. Five times out of 18 he ignores the second noun. So, 'give the lighter and the shoe to Rose': Kanzi hands Rose the lighter and then obsesses about food again. And four times out of 18, that's 22% against his baseline of 71-odd percent, he does the right thing: 'give me the milk and the lighter', and he actually does it. So this is a significant dip. There's no dip in Alia's performance.
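As a sanity check on that dip, here is a rough back-of-the-envelope calculation of my own; it is not how the corpus was analysed in the talk or in the original study. If Kanzi's true success rate on coordination trials were his overall baseline of roughly 71.5%, how unlikely would 4 or fewer correct responses out of 18 be? Treating the trials as independent and the overall baseline as the right comparison are simplifying assumptions, but the exact binomial tail comes out around 2 in 100,000, which is why the dip looks real rather than like noise.

```python
from math import comb

# Back-of-the-envelope check (mine, not from the talk or the original study):
# probability of 4 or fewer correct out of 18 coordination trials, if Kanzi's
# true success rate were his overall baseline of 71.5%.
n, k, p = 18, 4, 0.715
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
print(f"P(X <= {k} | n={n}, p={p}) = {p_value:.1e}")  # roughly 2e-5
```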
Alia's baseline was 66%, and here it's 68%. So this is a species-specific and a construction-specific dip in performance, and I would interpret that as suggesting that the ability to form this kind of constituent structure is somehow a human thing: a very genetically closely related species isn't doing it in response to quite a lot of exposure to English.

But it also turns out that Alia's behaviour was maybe not representative of the behaviour of typical two-year-olds. This is a remarkable study from Gertner and Fisher. The way it works is that two videos are played simultaneously to 21-month-old children. In the one over here, you've got a boy and a girl doing some novel action, playing with these streamers in a coordinated way, each one on their own, not interacting. Over here you've got some novel transitive action: tie a noodle around someone's waist and pull them around on a swivel chair. At this point, this one is doing something to that one; it's a dyadic thing that's happening. Then you play them one of three sentences: 'the boy is gaping a girl', 'the boy and the girl are gaping', or 'the girl and the boy are gaping'.

If you play them 'the girl and the boy are gaping', they all look over here, at the coordinated video. If you say 'the boy is gaping a girl', they all look at the obviously transitive one. But the interesting case is 'the boy and the girl are gaping': they still look over here, at the transitive video. Now, from an adult grammar perspective, that makes no sense. But if what you're doing is saying first noun, that's the agent; second noun, that's the patient; ignore anything you know about constituency and just look at the linear order of the nouns in the sentence, then you would end up looking at the transitive video. If they were interpreting like adults, they should look at the coordinated video, but they looked at the transitive one. That's a surprise. It seems like children at 21 months are still using linear order to work out which of these two videos to look at, rather than using hierarchical phrase structure. Of course, they learn soon afterwards to do this in an adult-like way, and they can be encouraged to do it even when they're very young, but they don't automatically see these tree structures; it's a thing they learn to do.

So what that means is that one thing which does seem to be a cognitive capacity we have as humans is the ability to learn these hierarchical structures. It seems like a closely related species can't learn them for English, or at least hasn't in this case. But these hierarchically structured representations are surely not unique to syntax or to language; I think Fitch has argued this at length now, and I'm persuaded, at least. So if there is anything unique to syntax, it's not going to be these hierarchical structures, it's not going to be this constituency. It'll be something else within the theory of grammar.
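To spell out the contrast, here is a small illustrative sketch, mine rather than Gertner and Fisher's analysis, of the shallow strategy the 21-month-olds seem to be using versus an adult-like reading. The word lists and the dictionary outputs are invented for the example; the only point is that ignoring the coordination gives a transitive, agent-patient reading, while respecting it gives a single plural agent.

```python
# A toy illustration (mine, not the study's analysis): the shallow "first noun
# = agent, second noun = patient" heuristic versus an adult-like reading that
# treats "X and Y" as one coordinated subject.

NOUNS = {"boy", "girl"}

def heuristic_reading(sentence):
    """Ignore constituency: first noun is the agent, second noun the patient."""
    nouns = [w for w in sentence.split() if w in NOUNS]
    if len(nouns) > 1:
        return {"agent": nouns[0], "patient": nouns[1]}   # looks transitive
    return {"agents": nouns, "patient": None}

def adultlike_reading(sentence):
    """If the two nouns are coordinated by 'and', they form one plural agent."""
    words = sentence.split()
    nouns = [w for w in words if w in NOUNS]
    if len(nouns) > 1 and "and" in words:
        return {"agents": nouns, "patient": None}          # coordinated, intransitive
    return {"agent": nouns[0], "patient": nouns[1] if len(nouns) > 1 else None}

s = "the boy and the girl are gaping"
print(heuristic_reading(s))   # {'agent': 'boy', 'patient': 'girl'} -> transitive video
print(adultlike_reading(s))   # {'agents': ['boy', 'girl'], 'patient': None} -> coordinated video
```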
From a minimalist perspective, the natural place to look would be Agree, which is the way we induce non-local relations like movement and so on. Other theoretical perspectives suggest otherwise: Steedman, for instance, in a paper I love called 'Plans, Affordances, and Combinatory Grammar', suggests that all of the relations you need in the theory of grammar can also be seen in an analysis of planning. In which case, maybe there would be nothing left that's distinctive about the theory of syntax. That would be nice: syntacticians turn up to work every day, and what are you studying today? Nothing, it's all been done.

Okay, so it's time to wrap up. Humans maybe don't automatically see hierarchical phrase structure everywhere, but we do learn to see it. This is a distinctive cognitive capacity: it's distinctive to humans, and it may not be distinctive to syntax, but we leverage it extensively in our natural language syntax. To that extent, there's something special about syntax; there's a reason for syntacticians to get out of bed. And maybe there's a bit more than that: I'm still willing to believe that there's something special about the syntax of non-local dependencies. Other people may disagree with that, but there's not much more than that.

So we can start to glimpse a really simple theory of syntax here. That doesn't mean that grammar is simple: syntax is not grammar, syntax is just a little part of grammar. But it means that the complexity we see, the complexity of the linguistic data we've uncovered, is going to emerge from interaction between simple systems like this. It's not going to be a product of one big monolithic complex structure like we saw before. And, yeah, that's what I will be professing. Thank you very much for listening and for being here. There are some tiny references.

Thanks so much. That was absolutely fascinating, really, really wide-ranging. You've taken in different constructions and linguistic phenomena, different languages, different stages of acquisition, different periods in the history of various languages, different language families. And I think, in your characteristic modesty, you repeated several times that you're not there yet and that this is all very much work in progress. But I think to the rest of us it's very clear that there is a lot emerging here from all this really wide-ranging work. So I found that deeply impressive. Thank you so much.

In the spirit and tradition of the school, we don't normally take questions at the end. I did ask Rob at the start, 'Would you like to have any questions at the end?', and he said, 'Not really.' But he's very happy to continue the conversation over drinks, which are ready in the foyer, thanks to Ruth from our excellent operations team. I think, mercifully for Rob, the drinks do not include any Twinkies or sashimi. So I suggest we thank Rob once again and then join him for a further conversation behind this door. Thank you.