I am trying to write an assembly language program to read a 16-key matrix keypad through Port A of the MCU in single-chip mode. Port A is split into two halves via its data direction register, DDRA: 4 output pins for the columns and 4 input pins for the rows. Zeroes are driven out on all 4 column pins simultaneously; when a key is pressed, it connects one of those zeroes to one of the row inputs, so the MCU can read Port A and work out which row the pressed key is in. The internal Port A pull-ups are enabled so that a floating input reads as a 1.
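In case it matters, these are the register equates the snippet below relies on (the standard HCS12 addresses from the include file I am using; please correct me if any of them are wrong):

    PORTA   equ  $0000   ; Port A data register
    DDRA    equ  $0002   ; Port A data direction register (1 = output)
    PUCR    equ  $000C   ; pull-up control; bit 0 (PUPAE) enables Port A pull-ups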
Here is the part of the code that's causing the problem:
    ldaa  #%00001111   ; now find the ROW
    staa  DDRA         ; low nibble (columns) = outputs, high nibble (rows) = inputs
    bset  PUCR,#$01    ; set PUPAE to enable the Port A pull-ups
    ldaa  #$F0         ; 0s for the column pins (1s latched for the input pins)
    staa  PORTA
    ldaa  PORTA        ; read back: which row bit is 0?
[software trap here]
The last line, the read of Port A, is the problem. When I hold down a key on the keypad, it does force one of the Port A row pins low, as verified with an oscilloscope while the program is running, while the other 3 bits of the high nibble remain high due to the pull-ups. I would therefore expect the A register to hold exactly one 0 among bits 7, 6, 5, and 4, with the rest 1s. However, the A register always reads %0000xxxx, i.e., all four high-nibble bits are 0.
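For concreteness, here is the row decode I have in mind once the read works, assuming row 0 is wired to PA4 up through row 3 on PA7 (row0 through row3 are placeholder labels of mine):

    ldaa  PORTA   ; read the rows in the high nibble
    anda  #$F0    ; mask off the column bits, keep PA7-PA4
    cmpa  #$E0    ; %1110xxxx -> key is in row 0
    beq   row0
    cmpa  #$D0    ; %1101xxxx -> key is in row 1
    beq   row1
    cmpa  #$B0    ; %1011xxxx -> key is in row 2
    beq   row2
    cmpa  #$70    ; %0111xxxx -> key is in row 3
    beq   row3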
So, when I read Port A, why do I not see the logic levels that I actually measure on the Port A pins? Adding to the conundrum: when I single-step through the code under the DBUG-12 monitor, it works perfectly!
I am using the Dragon12-Light development board.
It is driving me crazy, so I would appreciate any help. Thank you.