How do you quantize an expression? And would a computer program be considered a math proof within its limits?


0 is an expression, 1 is an expression, infinity is an expression.

To say it without sounding demeaning of the people who taught me: during my days as a student I was disinterested not in the information I was getting, but in the methods by which that information was analysed, conveyed and processed according to what was scientifically known at the time. Basically, what I want to convey is that I was not an attentive student in Maths or Physics, but I was good at Electronics, especially embedded systems. I probably didn't realize it at the time, but I was processing information through this other subject. I applied myself to programming 8-bit microcontrollers/microprocessors in assembly code, then started programming in higher-level languages, but something about programming 8-bit systems in assembly stuck with me, as it was the first job I got.

I was representing numbers, mapping memory and data lines. It gets most interesting when subtraction or division is involved, given that your limit, as quantity and representation, is only 0 to 2**8 − 1 (255).

How do you represent a number such as 4/3, or 1 − 3, in this system? You can represent it as a float/double/int, with varying representations of the same idea at varying precisions in many different programming languages, but here my limitation of 2**8 comes into play. In my own 8-bit number system, a real quantity can only be 0 to 64. Negatives have no physical representation, overflow/underflow are a concern, and given the limitations of the hardware I am limited in how many resources I can spend in pursuit of precision. We have to deal with clock cycles.

Coming from where I do, as an 8-bit programmer, in our field we had to study chip fabrication to understand our microcontrollers. So we had to study physics, especially Maxwell's equations and MOSFETs, and tiptoe into maths as well as physics.

Now, in my academic days I first came across Maxwell's equations after first using Coulomb's law, then Faraday's law, Gauss's law, then de Broglie, Planck's constant and ultimately Schrödinger's equation, all of which was cursory. Calculus, as I understood it, was just something I had to do, without real understanding of the symbols. But what I did understand was binary.

What I have realized in my pursuit of a representation of 4/3, or any other ratio, or a subtraction, addition or operation that breaks my number system, is that I am quantifying things. My 2 is the quantum. I can take 4/3 at face value on initialization, which would be 1 in quantity, unless it encounters something like 12 or 3/4, at which point it collapses into a precise physical representation within the bounds of the number system and the physical limitations of the machine. For example, 1 − 3 in traditional systems is minus 2, but physically that is non-existent: an 8-bit microcontroller will represent it either as 2**8 − 2 = 254 (wraparound) or as 0 (saturation), depending on the architecture, unless it encounters a 2 in future statements. My main point is that I can't express this in mathematical terms, but in my program I can express it with precision, and I think I might have a theory in my assembly code.

The thing is, I may or may not have found something, but I want to present it in a field other than the one I understand. Programmers might say it is only interesting for shaving off a few machine cycles; mathematicians might say it's a programming topic.

But I am expressing something from beginning to end, in a continuum. Some scholars may say calculus is advanced maths; some might say it is addition and subtraction over a period. But in my case the function itself knows its limits, whether it is counting between zero and one or from an arbitrary positive integer to infinity. It gives a precise value of what it calculates depending on where it stands.

It's self-aware in a sense, because it's not representing numbers, it's representing quantity and value. So far I haven't extended it to imaginary numbers like the square root of −1, but it is possible.

I want to test its limits and present it to mathematicians, but having no formal knowledge of the subject I cannot present it in scholarly terms. So can I submit my findings, at least within the limits I worked on, as computer code?

Best Answer

It sounds like you're describing some form of symbolic computation in which your software manipulates exact representations of mathematical objects instead of just calculating with approximate floating-point numbers. Tools like this already exist and are very sophisticated; see e.g. Mathematica.