There are two ways I know of to convert decimal to binary. The first way I learnt is done by repeatedly removing the largest power of 2 we can from the decimal value. Let's take 154 as an example. The powers of 2 used for an 8-bit value are 2^0, 2^1, 2^2, 2^3, 2^4, 2^5, 2^6 and 2^7, which gives the bit values 1, 2, 4, 8, 16, 32, 64 and 128. 8-bit binary can represent values up to 255, so for numbers larger than 255 you would need more bits.

Using this first method, starting from the most significant bit, we check whether we can subtract the bit's value from our number without going below 0. If we can, that bit is 1 and we keep the difference; if we can't, that bit is 0 and the number stays the same. We repeat this for each remaining bit value until our number reaches 0.

So we do 154 - 128 = 26. This result is not below zero, which means the bit representing 128 is 1. Next, 26 - 64 = -38, so 0 for 64. Then 26 - 32 = -6, so 0 for 32. Then 26 - 16 = 10, so a 1 for 16. Then 10 - 8 = 2, so a 1 for 8. Then 2 - 4 = -2, so 0 for 4. Then 2 - 2 = 0, so a 1 for 2, and since we have reached 0 the remaining bit for 1 is 0.

This ends with the values 128, 16, 8 and 2 being successfully subtracted, and this can then be filled in to the binary result as follows:

128  64  32  16   8   4   2   1
  1   0   0   1   1   0   1   0

and the result is 10011010.

The second method uses the remainders from repeatedly dividing the decimal value by 2 until the quotient reaches 0. So for 154 we do:

154 / 2 = 77 r 0
 77 / 2 = 38 r 1
 38 / 2 = 19 r 0
 19 / 2 =  9 r 1
  9 / 2 =  4 r 1
  4 / 2 =  2 r 0
  2 / 2 =  1 r 0
  1 / 2 =  0 r 1

We then read the remainders from the bottom upwards to get 10011010, which is our binary value converted from decimal. It's a bit harder to see why this second method works: each division by 2 strips off the least significant bit, and the remainder is that bit, so the remainders come out from the lowest bit to the highest. It will always give exactly as many bits as are needed to represent the decimal value in binary.
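To make the first method concrete, here is a short Python sketch of it (the function name to_binary_subtraction is just something I made up for illustration, and it assumes an 8-bit value like in the example). It walks the bit values from 128 down to 1 exactly as described above.

```python
# Sketch of the subtraction method for an 8-bit value (assumes 0 <= value <= 255).
def to_binary_subtraction(value):
    bits = ""
    for bit_value in (128, 64, 32, 16, 8, 4, 2, 1):
        if value - bit_value >= 0:   # subtracting doesn't take us below zero
            bits += "1"
            value -= bit_value       # keep the difference and move on
        else:
            bits += "0"              # this bit is 0, value stays the same
    return bits

print(to_binary_subtraction(154))    # prints 10011010
```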
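And here is a similar sketch of the second method (again, to_binary_division is just an illustrative name). It collects the remainders as it divides, then reads them from the bottom upwards by reversing the list; it assumes the value is greater than 0, as in the example.

```python
# Sketch of the divide-by-2 method (assumes value > 0).
def to_binary_division(value):
    remainders = []
    while value > 0:
        remainders.append(value % 2)   # remainder is the next least significant bit
        value //= 2                    # integer-divide until the quotient reaches 0
    # reading from bottom to top = reversing the order the remainders were produced
    return "".join(str(bit) for bit in reversed(remainders))

print(to_binary_division(154))         # prints 10011010
```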