1.) Understanding floating point and floating point numbers!

In programming, floating point is a variable type used to store floating point number values. A floating point number is a number where the position of the decimal point can "float" rather than being fixed at one position within the number.

Examples of floating point numbers: 1.523, 22.1 and 1.123

This is important in programming because a variable defined as an integer cannot store decimal places. Therefore, a floating point variable type must be used.

Examples of floating point variable types in C++: float, double

Examples of integer variable types in C++: int, short, long, char (plus platform-specific typedefs such as DWORD, BYTE or uchar), ... !

PS:
Different programming languages and operating systems may have different limitations or ways to define floating point numbers. Further information can be found in the documentation for the programming language or on the Internet on the manufacturer's website.

2.) Further information about floating point and floating point numbers!

A floating point number is a numerical representation used in data processing to approximate real numbers, including rational and irrational numbers, with a finite amount of storage space. It is called a "floating point" because the decimal point (or binary point in base-2 representation) can "float" to different positions within the number, allowing a wide range of values to be represented.

Floating point numbers are often used to perform numerical calculations in computer programs, particularly in scientific, engineering, and financial applications. They are essential for tasks that involve real-world measurements and calculations where exact precision is not always required.

The most common standard for representing floating-point numbers is the IEEE 754 standard, which defines various formats for representing numbers in binary (base 2) or decimal (base 10). In these representations, a floating point number typically consists of three components:

Sign bit:
This bit indicates whether the number is positive (+) or negative (-).

Exponent:
The exponent represents the magnitude of the number. It determines where the decimal point (or binary point) sits within the significand (or mantissa) and scales the value accordingly.

Significand (or mantissa):
The significand, sometimes called the fraction or mantissa, represents the significant digits of the number. It is usually normalized, meaning that the leading digit is non-zero and the binary point is positioned immediately after it.

By adjusting the exponent and significand, floating point numbers can represent a wide range of values, from very small (close to zero) to very large. However, because they have finite precision due to the limited number of bits used for the significand and exponent, they are subject to rounding errors and cannot always represent real numbers exactly. This limitation can cause accuracy problems in certain numerical calculations, particularly when performing operations that span a wide dynamic range or require high precision. Developers and scientists must be aware of these limitations when working with floating point numbers and take appropriate measures to minimize potential errors.
