Can anyone help me solve this? I'm converting a signed binary number to decimal, so I'm also using two's complement.
10010101
+ 01111111
--------------
100010100
Here is where it starts going wrong I think... Change the 1s/0s to 0s/1s and add 1
100010100 -> 011101011
011101011
+ 1
-----------
011101100
$(2^0\times0)+(2^1\times0)+(2^2\times1)+(2^3\times1)+(2^4\times0)+(2^5\times1)+(2^6\times1)+(2^7\times1)+(2^8\times0)\Rightarrow 0+0+4+8+0+32+64+128 = 236$
But the correct answer is $20$ according to my book. I don't know how to get that. Any help?
You are using 8-bit two's complement representation, so after adding the two 8-bit numbers you should discard the carry-out (the ninth bit) and keep only the low 8 bits. That is, the result of the addition is $00010100_{(2)} = 20_{(10)}$. You can check this by interpreting the operands as signed values: $10010101_{(2)} = -107$ and $01111111_{(2)} = 127$, and $-107 + 127 = 20$.
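Here is a minimal Python sketch of the same idea (the helper names `add8` and `to_signed` are my own): masking with `& 0xFF` discards the carry-out, which is exactly the "keep only the last 8 bits" step.

```python
def add8(a, b):
    # Add two 8-bit values; & 0xFF drops any carry-out past bit 7
    return (a + b) & 0xFF

def to_signed(x):
    # Interpret an 8-bit pattern as a two's-complement signed value
    return x - 256 if x & 0x80 else x

s = add8(0b10010101, 0b01111111)  # 149 + 127 = 276; low 8 bits are kept
print(bin(s), to_signed(s))       # 0b10100 20
```

Note that `to_signed(0b10010101)` gives `-107`, so the addition really is $-107 + 127 = 20$, matching the book's answer.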