Pitch recognition is related to sound recognition. In some languages, the pitch of syllables distinguishes unique words. In addition to getting the sequence of sounds right, a singer has to get the pitch, or melody, of the sounds right to communicate the intended meaning.
I wasn’t tone deaf, but it was hard to hear parts, especially two parts simultaneously. Learning piano allowed me to learn a melody separately from its accompaniment. I could transfer a melody to the keyboard one note at a time if the song was deeply etched in my being, but I wasn’t able to hear a melody once and memorize it if it ran more than a few measures. Checking whether my pitch was right by listening to myself was harder than trying to match a person singing next to me. Listening for the lowest note (bass) or the highest note (melody) was much easier than picking out the parts that lay between them in the composition.
Eventually, after a couple of years of choir practice and performances, and after familiarizing myself with many songs, I could sing the bass part after some practice. I suppose notation is like a language, because it was easier to quickly chime in with what I heard next to me, or with the part played on the piano, than to simply read notes. Reading notes gave me a heads-up on the pattern, the timing, and the relative movement up or down, but I never really locked in to the notes from the page alone. My voice, along with listening to the accompaniment or to others singing bass, was more helpful for self-correcting. It wasn’t all that bad. Most of the time I found my notes within milliseconds of others, and sometimes, with lots of practice, I could sing from memory.
Everything is relative, so from my standpoint, going from thinking I could sing parts to actually contributing to the richness of our section was possible because something happened in my brain. My ability to recognize and memorize sounds improved. Sometimes I could actually hear harmony parts while singing a separate part. This took time and effort that, I know now, came far harder to me than to most of the general public.