This is actually a program that I made in response to an extra credit assignment during my C++ programming class. It was made within the constraints of the lessons we had learned at the time. After a bit of refinement, it actually works as expected. Only two things are missing from it: inline documentation and the ability to save the prime numbers as a list.
One of the things that might be apparent to the discerning eye is that this program will actually test numbers that aren't prime (including numbers that are multiples and factors of primes). This isn't much of a concern. With the use of the multiplicity ("P"^"M"), numbers that are composites of a prime will automatically fail the even-divisibility test. "P" will just keep increasing until it gets to a number that evenly divides "R". Since all other composites (both less than and greater than "P") are taken care of by dividing by "P" raised to a multiplicity ("M"), it really wouldn't matter whether "P" is prime or not.
Any number that can't evenly divide "R" falls through the sieve and the next number is tested. Only when "P" is prime and evenly divides "R" (with respect to a multiplicity "M") does the number matter and get used to reduce "R". This ensures that prime numbers, and only prime numbers, reduce "R" until it equals 1 so that the program can terminate.
Even with such testing, the runtime of the algorithm would be something of the order of..., where c is the constant representing the number of operations required to test "P" (even when it's not prime) for the input number n (the "N" in the program). I may be wrong in my assessment (I would really love for someone to prove me wrong), but I believe this means that my program represents an algorithm that can conduct prime factorization in polynomial time. Slightly modified, it could work with larger numbers. Even difficult composite numbers composed of co-primes would fall prey to it.
Which is the main reason I'm posting it here. I would love for someone to prove me wrong, or to help me figure out how to let it handle larger numbers without taking too much away from the algorithm itself.
This is the program. It makes heavy use of (or rather heavily abuses) casting to properly deal with the numbers it has to work with. The only Achilles' heel of the entire program is the limitation of numerical precision. It tends to fall apart on (or just plain takes too long for my taste with) large numbers. This, however, is only a limitation of the computer's hardware and not of the program itself.
#include <iostream>
#include <cmath>

using namespace std;

int main() {
    char check = 'y';
    while (check == 'y') {
        int P = 2; int D = 0; int M = 0; int R = 0; int N = 0;
        cout << "Enter a number: ";
        cin >> N;
        cout << endl;
        R = N;  // R holds the still-unfactored remainder of N
        while ((P < (N + 1)) && (R >= P)) {  // try every candidate divisor P
            M = 0;
            if (((R % int(pow(double(P), double(M + 1)))) == 0) && ((R / int(pow(double(P), double(M + 1)))) >= 1)) {
                // raise the multiplicity M while P^(M+1) still divides R evenly
                while (((R % int(pow(double(P), double(M + 1)))) == 0) && ((R / int(pow(double(P), double(M + 1)))) >= 1)) {
                    M++ ; D++;
                }
                R = (R / int(pow(double(P), double(M))));  // divide out P^M
                cout << P << " is a factor of " << N << " with a multiplicity of " << M << ". \n";
            }
            P++;
        }
        if (D == 1) {  // exactly one successful division means N itself was prime
            cout << "Which means " << N << " is a prime number. \n" ;
        }
        cout << "Again (y/n)?" ;
        cin >> check ;
        cout << endl;
    }
    return 0;
}
The program works by using a combination of the sieve of Eratosthenes and trial division. After a bit of initialization, it begins by copying the number to variable "R". "P" is the current prime number and "M" is the multiplicity of the prime (P^M) that will be checked to determine if it can evenly divide "R". If it can, "M" (for multiplicity) is increased by one and the test is repeated until it fails. Once it does, the actual division takes place to reduce "R" to a lesser number.
If "P" fails to evenly divide "R" with the multiplicity of one ("M" = 1), then "P" is increased by one and the whole test repeats till "R" is reduced to 1. If the division is successful, the prime number ("P") and its multiplicity ("M") is displayed. If the division failed, "P" is still increased by one to test the next number for division.
After many successive tests and divisions, "R" will eventually reduce to 0 and the program ends. The program also has a flag "D" that if it's less than two (it increases with every increase of "P" and/or "M") will conclude the program with the display that the number given to it is, in fact, prime.
A linear check of all numbers p up to n is not a polynomial-time factorization. You could skip the exponents and just check n % p from 2 to sqrt(n) and it would be faster, and it would still not be polynomial time.
In the context of factoring integers, it's polynomial time if you can express the worst-case runtime as a polynomial in the number of bits of the input. i.e., factoring a 64-bit number should only take 2x longer than a 32-bit number if your complexity is O(n), or 4x longer if your complexity is O(n^2). In your algorithm's case it would take 2^32x longer.