Mathematics

Prime Numbers, Part I

Consumption & Competition Gazette

By Miguel Vidal Bravo - Jandia

I. Principle

Let me ask you: what do the times tables of 4, 16, and 21 mean to you in mathematics and in physics? And in terms of prime numbers?

In mathematics, the times table of 4 gives the multiples of 4: 4, 8, 12, 16, 20, 24, 28, 32, 36, 40... 16 appears in this table (4 × 4 = 16), so it is a perfect square. We therefore see a hierarchy of powers of 2 here.

The times table of 21 gives the multiples of 21: 21, 42, 63, 84, 105... 21 = 3 × 7, so it is not a prime number.
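For illustration, a small snippet (Python, my choice of language here) reproduces these tables:

```python
# Illustrative sketch: the times tables of 4 and 21 listed above.
multiples_of_4 = [4 * k for k in range(1, 11)]    # 4, 8, ..., 40
multiples_of_21 = [21 * k for k in range(1, 6)]   # 21, 42, ..., 105

print(multiples_of_4)          # [4, 8, 12, 16, 20, 24, 28, 32, 36, 40]
print(16 in multiples_of_4)    # True: 4 x 4 = 16, a perfect square
print(multiples_of_21)         # [21, 42, 63, 84, 105]
```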

In physics:

4: Number often linked to dimensionality (the 4 dimensions of spacetime in special relativity).

16: Number of cells in a 4×4 square (linked to matrices and transformations in physics).

21: May recall the 21-centimeter hydrogen line, used in astronomy to study the structure of the Universe.

Prime Numbers

4, 16, 21 are not prime.

4 = 2 × 2, 16 = 2⁴, 21 = 3 × 7. But note that 21 can still be written as a sum of two primes: 2 + 19 = 21.

a/ Definition: A prime number is a natural number strictly greater than 1 that has exactly two divisors: 1 and itself. In other words, a number p is prime if it cannot be written as a product of two natural numbers other than 1 × p.

Examples of prime numbers:

2 (the only even prime number), 3, 5, 7, 11, 13, 17, 19, 23, 29, etc.

Counterexamples (composite numbers):

4 = 2 × 2 → not prime

16 = 2 × 8 or 4 × 4 → not prime

21 = 3 × 7 → not prime
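These checks can be automated. A minimal trial-division test (sketched here in Python, purely for illustration) confirms the lists above:

```python
# Naive primality test by trial division -- sufficient for small numbers.
def is_prime(n: int) -> bool:
    if n < 2:                      # 0 and 1 are excluded by definition
        return False
    d = 2
    while d * d <= n:              # a divisor, if any, appears by sqrt(n)
        if n % d == 0:
            return False           # found a divisor other than 1 and n
        d += 1
    return True

for n in (2, 3, 4, 16, 21):
    print(n, is_prime(n))          # 2 True, 3 True, 4 False, 16 False, 21 False
```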

Curiosities about prime numbers:

The smallest prime number is 2.

There are infinitely many prime numbers (proven by Euclid).

They are fundamental: every natural number greater than 1 can be factored into a unique product of prime numbers (the fundamental theorem of arithmetic).

The distribution of the primes remains a great mystery in mathematics (the Riemann hypothesis, among others).

Now, what if I kept telling you that in mathematics 1 + 1 = 3, whereas in classical arithmetic, obviously, 1 + 1 = 2?

So please remember that 1 + 1 = 3 and has never been 2; I'll prove this to you later and save the concept for now. It's just like being taught that the square is, or is written as, a 2: Einstein wrote E = MC², which means energy equals mass times the speed of light squared. But look at the word I just wrote: square. I'll write it again: "square." A square has never been 2, but 4.

You see where I'm going with this: I'm challenging learned concepts, particularly the way mathematical formulas and notations are interpreted. On E = MC² and the notion of "square," there's an interesting criticism to make: in E = MC², the symbol "²" is called a "square" because it represents the multiplication of a quantity by itself, but in geometry a square is a 4-sided figure.

So, I draw a parallel between mathematical language and everyday language, and I question their validity. Indeed, the notion of "square" in mathematics isn't necessarily intuitive for everyone. We use the same word to refer to:

A power (²): in algebra, x² means x multiplied by x, which is a misnomer, in my opinion.

A geometric shape: a quadrilateral with 4 equal sides.

But where E = MC² is clear is that the notation C² does indeed mean "C multiplied by itself," not a square in the geometric sense. It is a historical misnomer that has led to the power 2 being called the "square," some, if not all, would say.

b/ Discussion: I seek to show that classical mathematics is a human construct and that it is not absolute. If we tolerate abuses of language, how can we be sure that mathematics is a rigorous and reliable description of the world? Abuse of language is common in mathematics, which often borrows everyday words whose mathematical meaning differs from their ordinary one.

Example 1: The word "square" is used to refer to x² (x × x), whereas a square in geometry has four sides.

Example 2: We say that a line "is infinite," but nothing is ever physically infinite; the line is an abstraction.

Mathematicians create formal systems, but since they must explain these systems in human language, they inevitably resort to approximations and abuses of language. So, if mathematicians don't express themselves precisely, why trust them? As in law, qualification is extremely important: from it, the applicable system is derived. If someone doesn't express themselves well, why assume they reason well? An unsatisfactory qualification leads to a false deduction, or even false reasoning.

Example:

Newton provided "perfect" laws of gravity. But Einstein showed them to be false (or rather, approximate). Perhaps Einstein is also wrong, and we'll discover another, even more precise model. Therefore, we can never be 100% sure that our mathematics and our observations are "true."

As Einstein said, people will have to fight not two or three, but four plagues: the world will have to face the nuclear bomb, the population bomb, and the information bomb (that makes three plagues; the fourth is the economic, or money, bomb). He knew that the classical equations were incomplete and that information could play a crucial role.

So, I persist and add that T = E = MC⁴, and I go even further by saying that T = E = MC⁴ = 0. Please take note; you will judge me later.

For now, I would like us to return to prime numbers to lay a good foundation for the discussion. We say that a prime number is a natural number strictly greater than 1 that has only two divisors: 1 and itself. Yet if I compute 4 ÷ 4 = 1 and 4 ÷ 1 = 4, and 3 ÷ 3 = 1 and 3 ÷ 1 = 3, I don't see the difference on my calculator, nor where the rule "strictly greater than 1" comes from.
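The difference the calculator misses is the full list of divisors: 4 also admits 2, while 3 admits nothing between 1 and itself, and 1 has only a single divisor, which is exactly what the "strictly greater than 1" rule encodes. A short sketch (Python, for illustration) makes this visible:

```python
# Count every divisor of n, not just n / n and n / 1.
def divisors(n: int) -> list[int]:
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(3))   # [1, 3]      -> exactly two divisors: prime
print(divisors(4))   # [1, 2, 4]   -> the extra divisor 2 disqualifies 4
print(divisors(1))   # [1]         -> only one divisor, hence 1 is excluded
```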

Why does this rule really exist? Because it allows every integer (except 0 and 1) to be uniquely decomposed into a product of prime numbers. This property is called the fundamental theorem of arithmetic. Example: 12 = 2 × 2 × 3 = 2² × 3 ← prime factorization. If 1 were prime, this factorization would no longer be unique:

12 = 1 × 2 × 2 × 3 or 1¹⁰⁰ × 2 × 2 × 3…
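A small factorization sketch (Python, illustrative only) shows the convention at work: trial division starts at 2 precisely because 1 is excluded, and it returns a single canonical decomposition:

```python
# Trial-division factorization: starts at 2 because 1 is excluded from the primes.
def factorize(n: int) -> list[int]:
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

print(factorize(12))   # [2, 2, 3] -- one canonical decomposition, 2^2 x 3
# If 1 counted as prime, any number of 1s could be prepended and
# the decomposition would no longer be unique.
```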

And that would break all modern arithmetic. We've just shown, if it hadn't been done already, that this is a mathematical convention, which explains why 0 and 1 are excluded from the primes. And yet, for me, 0 is an integer and a divisor, isn't it? But it is not considered a valid divisor, unless I'm mistaken, at least not in the mathematical sense.

Hence the fact that we can never divide by 0. Example: 5 ÷ 0 is a forbidden (undefined) operation. Why? Because there is no real number x satisfying 5 = x × 0, since x × 0 = 0 for every x.

On the other hand, 0 can be divided by any non-zero number: 0 ÷ 3 = 0 and 0 ÷ 1000 = 0.

Therefore, 0 has infinitely many divisors, but it divides no number except 0 itself.
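For illustration, this is exactly the behavior a programming language enforces (Python shown here):

```python
# Division by zero is undefined; Python raises an error rather than invent a value.
try:
    print(5 / 0)
except ZeroDivisionError as e:
    print("5 / 0 ->", e)   # division by zero

# Zero divided by any non-zero number is perfectly fine:
print(0 / 3)       # 0.0
print(0 / 1000)    # 0.0
```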

I suggest you "break" conventions, or rather open your mind wide, because this is a world of conventions. Thus, as has been suggested, mathematics is not "pure"; it is constructed.

We can assume that 0 is a value.

II. Proof

My initial proof was to say that 1 + 1 = 1 and never equaled 2, because one product plus one product can equal one product, not two products. For example: 1 product purchased plus 1 product offered for free equals, I repeat, 1 product paid for.

Think carefully about this, because in my example the product offered, even though it has a physical reality, is equal to 0, while still carrying an implicit value.

There are two physical objects, but only one unit of transactional value. The product offered, although physically existing, has zero "market value" in this operation. This is consistent with my previous reasoning: 0 is a real value in certain systems, here economic.

a/ Logical implication: I therefore question the basic mathematical axiom 1 + 1 = 2 by showing that everything depends on the reference system (value, perception, cost, energy, etc.). According to my reasoning: 1 + 1 = 1, because 1 + 0 = 1 and the second "1" has value 0 in this context.

The implications are multiple: depending on the semantic context (legal, economic, physical, etc.), this opens a path towards contextual mathematics, where meaning precedes the operation.

Note:

It is often overlooked that, according to current conventions, the place of 0 is before 1 on the number line.

By convention, we write: ... -2, -1, 0, 1, 2, ... Therefore, 0 precedes 1 in position on the axis. But this position is purely linear and formal, not semantic. In logic (and in many real-life cases), 0 only has "value" in relation to what it accompanies or follows.

Example:

In the numeral 10, the 1 is followed by a 0: it is the 0 that gives the 1 its value of ten.
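A one-line check of this positional reading (Python, for illustration):

```python
# Place value: the 0 in "10" pushes the 1 into the tens place.
print(1 * 10 + 0 * 1)          # 10
print(int("1"), int("10"))     # 1 10 -- appending a 0 multiplies the value by ten
```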

b/ The logic that follows, and its consequences for the axioms:

I thus break with a universally accepted axiom:

that addition = accumulation. In my model, addition = transformation of meaning according to the frame of reference. Note, therefore, the fundamental role of 0 as a real contextual value, not equal to nothingness.

Let's continue:

1 + 1 = 1, and therefore 1 + 0 = 2. Why? Well, 1 product (purchased) + 1 product (given, therefore zeroed) = 1 product (paid for). So we can also write 1 + 0 = 2: 1 product (purchased) + 1 product (offered, therefore zeroed, therefore = 0) = 1 product (paid for), yet here the total equals 2 and not 1, because there are 2 products (obtained). Hence: 1 + 1 = 1 and 1 + 0 = 2.

I'm therefore trying to demonstrate a structured logical sequence, with the two equations at the heart of my reasoning:

1 + 1 = 1 (value)

1 + 0 = 2 (reality obtained)


On the one hand, in economics, we evaluate what was paid for. On the other hand, in physics, we observe what was actually obtained.
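To keep the two frames distinct, here is a toy sketch (Python; the function and field names are mine, not part of the original argument) that returns both readings at once:

```python
# Toy model of the two frames of reference (names and structure are mine).
def contextual_sum(purchased: int, offered: int) -> dict:
    value_paid = purchased + 0 * offered   # the free item is zeroed: 1 + 1 = 1 (value)
    items_obtained = purchased + offered   # both exist physically: 1 + 0 "=" 2 (reality)
    return {"value_paid": value_paid, "items_obtained": items_obtained}

print(contextual_sum(1, 1))   # {'value_paid': 1, 'items_obtained': 2}
```

But let's continue: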

1 + 1 = 1 and 1 + 0 = 2, so 2 - 0 = 1, whereas in classical arithmetic 2 - 0 = 2.

Here is the second subtraction: we start from 2 - 0 = 1.

The 0 here, being zeroed, is actually a 1, so we have 2 - 1 = 1.

We can also write 2 - 2 = 0. Why? Because this is in fact a shortcut: 2 products - 0 products, but the zero, as we said, is really a zeroed product, so in fact 0 = 1. And if 0 = 1, then we have 1 = 1, and therefore two units in the end, thanks to the equality created: 1 and 1, that is, 1 + 1, which is 2 units. So in the end we can still write 2 - 2 = 0, and we fall back, partially, on classical mathematical theory.

I give zero value: it is not worthless; it represents something given, invisible, implicit. What is perceived is not necessarily what has value, and what has no value can still produce an effect (e.g., 0 = 1 in certain situations). We can therefore speak of contextual rules with implicit value, giving 0 an operative role.

A small clarification: according to my system, 0 is therefore prime, isn't that true?

I show that 0 acts, that it has a real influence on the resulting reality (e.g., 1 + 0 = 2). Hence a possible new definition of the prime number: a prime number is a stable integer, an indivisible source, which generates values without being decomposable into factors other than itself and the base unit (according to the contextual logic of the system).

So, in my system:

0 is indivisible (division by 0 does not fragment, it restores unity),

0 is stable (it creates a reference),

0 is fundamental,

0 is not composite, and acts as an origin.
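Purely as an illustrative sketch of this alternative definition (the predicate name and the encoding are mine), one could write:

```python
# Toy encoding of the alternative definition (illustrative only).
def is_contextual_prime(n: int) -> bool:
    if n == 0:
        return True   # 0 taken as indivisible, stable, and fundamental in this system
    if n < 2:         # 1 remains excluded, as in the classical convention
        return False
    return all(n % d for d in range(2, n))   # classical test for everything else

print([x for x in range(10) if is_contextual_prime(x)])   # [0, 2, 3, 5, 7]
```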

Conclusion:

In my alternative system:

0 is a prime number. And perhaps even... the most fundamental.

Author:

Miguel Vidal Bravo - Jandia

Engineer - Master II in Law

Paris II / Panthéon - Assas

UFR de Montpellier I - Center for Consumer Law