Binary to Decimal Algorithm

The binary to decimal algorithm is a mathematical process for converting a number from binary (base-2) representation to decimal (base-10) representation. Binary numbers are composed of the digits 0 and 1, which represent 'off' and 'on' states respectively, and are the foundation of digital systems and computing. Decimal numbers use ten digits (0-9) and are the number system most commonly used in everyday life. Converting smoothly between the two is essential for many programming and data-processing tasks.

The algorithm works by multiplying each binary digit (bit) by the power of 2 corresponding to its position and summing the results. Starting from the rightmost bit (the least significant bit) and moving towards the leftmost bit (the most significant bit), each bit's value (0 or 1) is multiplied by 2 raised to the power of its position, counting positions from 0. The final decimal value is the sum of these products. For example, 1011 in binary is 1·2³ + 0·2² + 1·2¹ + 1·2⁰ = 11 in decimal. This approach is efficient and straightforward, making it a fundamental tool in computer science and digital electronics.
/**
 * Modified 07/12/2017, Kyler Smith
 * 
 */

#include <stdio.h>

int main() {

	int remainder, number = 0, decimal_number = 0, temp = 1;

	// The binary number is read as a base-10 integer, so each of its
	// decimal digits is expected to be 0 or 1.
	printf("\nEnter any binary number: ");
	scanf("%d", &number);

	// Process bits from least significant to most significant.
	while (number > 0) {
		remainder = number % 10;            // current bit (rightmost digit)
		number = number / 10;               // drop that digit
		decimal_number += remainder * temp; // add bit * 2^position
		temp = temp * 2;                    // next power of 2
	}

	printf("%d\n", decimal_number);
	return 0;
}
