Hello Zohreh, and welcome to the forum.
I assume that you are using assembly code.
Since the quotient of a division (by 10 decimal) may itself be 16 bits long, the process becomes a "long division" that uses the DIV instruction twice, once for each byte of the 16-bit value. The first division starts with H = 0; its remainder, left in H, then becomes the most significant byte of the dividend for the second division.
This is demonstrated in the following assembly subroutine. Each decimal digit value is pushed to the stack until four digits have been processed, at which point ACC contains the MS digit. Each digit is then converted to ASCII and stored in BUF.
;**************************************************************
; 16-BIT BINARY TO 5-DIGIT ASCII DECIMAL CONVERSION
;**************************************************************
; On entry, the first two bytes of BUF contain the binary number
; (MS byte first). On exit, the six bytes of BUF contain a
; null-terminated numeric ASCII string.
CONVERT: LDX #4 ; Number of divisions required
         STX  BUF+5     ; Loop counter - ends as null terminator
LDX #10 ; Divisor
CNV1: BSR DIVIDE
PSHH ; Store remainder to stack
DBNZ BUF+5,CNV1 ; Loop for next digit
ADD #'0' ; Convert to numeric ASCII
STA BUF ; MS digit
CLRX ; Buffer index
CNV2: PULA ; Get value from stack
ADD #'0' ; Convert to numeric ASCII
         STA  BUF+1,X   ; Store digits, MS digit first
INCX
CPX #4 ; Test for maximum digits
BLO CNV2 ; Loop if not
RTS
DIVIDE:  CLRH           ; Clear initial remainder
         LDA  BUF       ; MS byte of dividend
         DIV            ; Divide H:A by X
         STA  BUF       ; MS byte of quotient
         LDA  BUF+1     ; Remainder in H joins LS byte
         DIV            ; Divide H:A by X
         STA  BUF+1     ; LS byte of quotient, remainder in H
         RTS
Regards,
Mac