Decimal to Binary Algorithm

In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers in which each decimal digit is represented by a fixed number of bits, usually four or eight. In byte-oriented systems (i.e. most modern computers), the term unpacked BCD usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two decimal digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9.
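
As a rough sketch of packed BCD (not part of the conversion program below), the tens digit of a two-digit value can be stored in the high nibble of a byte and the ones digit in the low nibble. The helper names pack_bcd and unpack_bcd are made up for this illustration, and real packed-BCD formats often reserve a nibble for the sign:

#include <stdio.h>

/* Illustration only: pack two decimal digits (0-99) into one byte,
   tens digit in the high nibble, ones digit in the low nibble. */
unsigned char pack_bcd(int value)
{
    return (unsigned char)(((value / 10) << 4) | (value % 10));
}

int unpack_bcd(unsigned char bcd)
{
    return (bcd >> 4) * 10 + (bcd & 0x0F);
}

int main(void)
{
    unsigned char b = pack_bcd(42);
    printf("42 packed as BCD: 0x%02X, unpacked: %d\n", b, unpack_bcd(b));
    return 0;
}

The program below solves a different problem: it reads a non-negative decimal integer and prints its plain positional binary representation by repeatedly dividing by 2 and collecting the remainders.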
#include <stdio.h>
#include <stdlib.h>

#define MAXBITS 100

int main()
{

    // the number entered by the user
    int inputNumber;

    // for the remainder
    int re;

    // contains the bits 0/1
    int bits[MAXBITS];

    // for the loops
    int j;
    int i=0;

    printf("\t\tConverter decimal --> binary\n\n");

    // reads a decimal number from the user.
    printf("\nenter a positive integer number: ");
    scanf("%d",&inputNumber);

    // make sure the input is a non-negative integer.
    if (inputNumber < 0)
    {
        printf("only positive integers >= 0\n");
        return 1;
    }

    // actual processing
    while(inputNumber>0)
    {

        // the remainder modulo 2 is the next binary digit
        re = inputNumber % 2;

        // integer division by 2 drops the bit just extracted
        inputNumber = inputNumber / 2;

        bits[i] = re;
        i++;

    }

    printf("\n the number in binary is: ");

    // iterates backwards over all bits
    for(j=i-1; j>=0; j--)
    {
        printf("%d",bits[j]);
    }

    // special case: the input is 0, so the loop above never ran
    // and no digits were stored; print a single 0
    if (i == 0)
    {
        printf("0");
    }

    return 0;
}
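
For example, entering 13 produces the remainders 1, 0, 1, 1 (13 → 6 → 3 → 1 → 0); printed in reverse order this gives 1101, which is indeed 13 = 8 + 4 + 1 in binary.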
