In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers in which each decimal digit is represented by a fixed number of bits, usually four or eight. In byte-oriented systems (i.e. most modern computers), the term unpacked BCD usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two decimal digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9.

COMING SOON!

```c
/**
 * Modified 07/12/2017, Kyler Smith
 *
 * Converts a binary number (entered as a sequence of 0s and 1s)
 * to its decimal equivalent.
 */
#include <stdio.h>

int main() {
    int remainder, number = 0, decimal_number = 0, temp = 1;

    printf("\nEnter any binary number: ");
    scanf("%d", &number);

    // Peel off one binary digit per iteration, starting from the
    // least significant digit.
    while (number > 0) {
        remainder = number % 10;
        number = number / 10;
        decimal_number += remainder * temp;
        temp = temp * 2; // next power of 2
    }

    printf("%d\n", decimal_number);
    return 0;
}
```