{VERSION 3 0 "SGI MIPS UNIX" "3.0" }
{USTYLETAB {CSTYLE "Maple Input" -1 0 "Courier" 0 1 255 0 0 1 0 1 0 0 1 0 0 0 0 }{CSTYLE "" -1 256 "" 1 24 0 0 0 0 0 0 0 0 0 0 0 0 0 }{CSTYLE "" -1 257 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 258 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 259 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 260 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 261 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 }{CSTYLE "" -1 262 "" 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 }{PSTYLE "Normal" -1 0 1 {CSTYLE "" -1 -1 "" 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 }0 0 0 -1 -1 -1 0 0 0 0 0 0 -1 0 }}
{SECT 0 {EXCHG {PARA 0 "" 0 "" {TEXT 256 29 "FFT (Fast Fourier Transform)." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 42 "Evaluation of a polynomial at some points." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 5 "N:=6;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 32 "f:=randpoly(x,degree=N-1,dense);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "fh:=convert(f,horner);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "subs(x=3,fh);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "subs(x=4,fh);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 298 "If we have a polynomial f of degree < N, then we can evaluate it at a point x=P with O(N) additions and O(N) multiplications if we first bring the polynomial in Horner form. Now we will consider a polynomial of degree <N, and we will evaluate it at N points x = p1,...,pN. How much would that cost?" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 34 "Points:=[seq(ithprime(i),i=1..N)];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 161 "To evaluate f at all of these points would cost: there are O(N) points, and each point costs O(N), so it costs O(N^2) operations (additions and multiplications)." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 291 "The notation O(N) means: a function which grows as fast (or at most as fast) as the function N. And O(N^2) means: a function that grows as fast (or at most as fast) as N^2. Any function f(N) = a*N^2 + b*N + c grows as fast as a quadratic function, so any such function is said to be: O(N^2)."
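As a language-neutral cross-check of the Horner-form evaluation done above with convert(f,horner) and subs, here is a minimal Python sketch; the function name and the sample coefficients are invented for illustration, not taken from the worksheet.

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x from its coefficient list
    [c0, c1, ..., c_{N-1}] (lowest degree first) using Horner's rule:
    c0 + x*(c1 + x*(c2 + ...)), i.e. N-1 multiplications and N-1 additions."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# f(x) = 789 + 456*x + 123*x^2, so f(3) = 789 + 1368 + 1107
print(horner([789, 456, 123], 3))  # -> 3264
```

Evaluating one point this way costs O(N) operations, which is the count the next paragraphs build on.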
}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 85 "Values:=[seq(subs(x=i,fh),i=Points)];\n# If N=nops(Points) then this takes time O(N^2)" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 112 "Now we will keep Points the same for a while, and we'll consider how the Values depend on the coefficients of f." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 37 "Coeffs:=[seq(coeff(f,x,i),i=0..N-1)];" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 32 "A:=linalg[vandermonde]([p,q,r]);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "A:=linalg[vandermonde](Points);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 19 "evalm(A &* Coeffs);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 90 "So, we can get the Values at the Points from the Coeffs by a linear map A, where A is the " }{TEXT 257 18 "Vandermonde matrix" }{TEXT -1 15 " of the Points." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 30 "The matrix A is the map that, " }{TEXT 258 37 "given the vector of coefficients of f" }{TEXT -1 2 ", " }{TEXT 259 35 "produces the vector of values of f " }{TEXT -1 20 "at the given Points." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 27 "We'll think of matrix A as " }{TEXT 260 29 "evaluation of f at the points" }{TEXT -1 1 "." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 9 "Then the " }{TEXT 261 12 "inverse of A" }{TEXT -1 16 " is of course: \"" }{TEXT 262 27 "interpolation at the points" }{TEXT -1 171 "\". For example, if a polynomial g of degree <N takes given values at N distinct points, then g is uniquely determined; finding g from those values is called interpolation." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 93 "Values:=[seq(i^2,i=1..N)];\n# Suppose we search for a polynomial g that takes on these values:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 127 "B:=evalm(A^(-1)):\n# A transforms coefficients -> values\n# and so B = A^(-1) transforms values -> coefficients                  " }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 27 "Coeffs:=evalm(B &* Values);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 92 "g:=add(Coeffs[i+1]*x^i,i=0..N-1);\n# This g takes values [1,4,9,..] at the points [2,3,5,...]
" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 17 "Let's check that:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 28 "[seq(subs(x=i,g),i=Points)];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 112 "So, if we keep the Points fixed, and we've computed A and B:=A^(-1) then we can evaluate at the points by doing:" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 27 "Values:=evalm(A &* Coeffs);" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 162 "And, when we are given the Values at the Points, then we can interpolate, i.e. find the corresponding polynomial that takes these values at those points by doing:" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 27 "Coeffs:=evalm(B &* Values);" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 264 "When A and B are known then this matrix*vector product costs O(N^2) operations. It seems you can not do it faster: after all, each matrix contains N^2 entries, so whenever you do anything with all entries, the computational cost must be at least O(N^2) operations." }}{PARA 0 "" 0 "" {TEXT -1 277 "So, we are led to believe that evaluating or interpolating a polynomial of degree < N at N points costs O(N^2) operations. However, when the Points are chosen in a very special way, we will see that it can be done faster! That is what is called FFT, the Fast Fourier Transform."
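The evaluate/interpolate pair just described can be mimicked outside Maple. Below is a Python sketch in exact rational arithmetic; all helper names are invented, and plain Gaussian elimination stands in for the worksheet's evalm(A^(-1)).

```python
from fractions import Fraction

def vandermonde(points):
    # Row i is [1, p_i, p_i^2, ...]; multiplying by the coefficient
    # vector [c0, c1, ...] evaluates the polynomial at p_i.
    n = len(points)
    return [[Fraction(p) ** j for j in range(n)] for p in points]

def mat_vec(A, v):
    # O(N^2) matrix*vector product: coefficients -> values
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def solve(A, b):
    # Gauss-Jordan elimination over the rationals: returns x with A x = b,
    # i.e. applies A^(-1); with a Vandermonde A this is interpolation.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

points = [2, 3, 5, 7, 11, 13]                    # first six primes, as above
values = [Fraction(i * i) for i in range(1, 7)]  # target values 1, 4, 9, ...
A = vandermonde(points)
coeffs = solve(A, values)            # interpolation: values -> coefficients
print(mat_vec(A, coeffs) == values)  # evaluation maps them back: True
```

Both directions here cost O(N^2) once A and its inverse are known, matching the operation count argued above.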
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 26 "The idea is the following:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 70 "f_even := add(coeff(f,x,2*i)*x^(2*i),i=0..iquo(N,2));\n# the even terms" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 72 "f_odd := add(coeff(f,x,2*i+1)*x^(2*i+1),i=0..iquo(N,2));\n# the odd terms" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 11 "f0:=f_even;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "f1:=expand(f_odd/x);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 62 "F0:=rem(f0,x^2-X,x);\n# F0 = even terms, with x^2 replaced by X" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 79 "F1:=rem(f1,x^2-X,x);\n# F1 = odd terms, divided by x, and then x^2 replaced by X" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 266 "Now we have to evaluate f_even and f_odd at the points Points=[p1,p2,...], then add them, and we get the values of f at those points. We're almost done once we have computed the values of f0 and f1 at those points (the values of f1 still need to be multiplied by x)." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 80 "Now f0 and f1 are even functions, which means: f0(p) = f0(-p) for any number p." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 272 "Let's assume for now that N is even, N=2*n, and that the set \{ p.(n+1)..p.(2*n) \} is the set you get from \{p.1 .. p.n\} when you multiply everything by -1 (note: this does not hold for the Points we took in our example above, so we must make a different choice of points)." }}{PARA 0 "" 0 "" {TEXT -1 85 "In that case we only need to evaluate f0 and f1 at half of the points, namely p1..pn." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 251 "Evaluating F0 and F1 at the points p1^2 ... pn^2 will give us the same result as evaluating f0 and f1 at p1..pn. But: the degrees of F0,F1 are smaller than the degree of f, and so this is cheaper than evaluating f at all points p1..pN (where N=2n)."
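The even/odd splitting can be tried out in a small Python sketch (the names are invented for illustration). F0 holds the even-degree coefficients and F1 the odd-degree ones already divided by x, both as polynomials in X = x^2, so that f(p) = F0(p^2) + p*F1(p^2) and f(-p) = F0(p^2) - p*F1(p^2): one evaluation of each half yields f at two points.

```python
def eval_poly(coeffs, x):
    # plain Horner evaluation, coefficients lowest degree first
    r = 0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def eval_pair(coeffs, p):
    """Return (f(p), f(-p)) from one evaluation of the even part and
    one of the odd part, each a polynomial of half the degree in p^2."""
    F0, F1 = coeffs[0::2], coeffs[1::2]
    e = eval_poly(F0, p * p)       # F0(p^2) = f_even(p) = f_even(-p)
    o = p * eval_poly(F1, p * p)   # p*F1(p^2) = f_odd(p) = -f_odd(-p)
    return e + o, e - o

coeffs = [45, 34, 23, 12]    # 45 + 34x + 23x^2 + 12x^3, an arbitrary example
print(eval_pair(coeffs, 2))  # same as (f(2), f(-2)) -> (301, -27)
```

This is exactly the "almost a factor of 2" saving described next, and the recursion on F0 and F1 is what turns it into the full FFT.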
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 39 "So the idea to evaluate f at p1..pN is:" }}{PARA 0 "" 0 "" {TEXT -1 56 "n = N/2 and p.(n+1)..pN is the same as -1 times p1..pn. " }}{PARA 0 "" 0 "" {TEXT -1 15 "Compute F0, F1." }}{PARA 0 "" 0 "" {TEXT -1 32 "Evaluate F0, F1 at p1^2 .. pn^2." }}{PARA 0 "" 0 "" {TEXT -1 43 "Now we know the values of f0, f1 at p1..pn." }}{PARA 0 "" 0 "" {TEXT -1 43 "Now we know the values of f_even at p1..pn." }}{PARA 0 "" 0 "" {TEXT -1 62 "For each p in p1..pn, multiply f1(p) by p to obtain f_odd(p)." }}{PARA 0 "" 0 "" {TEXT -1 43 "Then we know the values of f_odd at p1..pn." }}{PARA 0 "" 0 "" {TEXT -1 53 "Now we know f_even(p) and f_odd(p) for all p=p1..pn." }}{PARA 0 "" 0 "" {TEXT -1 77 "Then we also know the values of f_even(p) and f_odd(p) for all p=p.(n+1)..pN," }}{PARA 0 "" 0 "" {TEXT -1 38 "because p.(n+1)..pN = -p1, -p2, .. -pn" }}{PARA 0 "" 0 "" {TEXT -1 3 "and" }}{PARA 0 "" 0 "" {TEXT -1 28 "    f_even(-p) = f_even(p)  " }}{PARA 0 "" 0 "" {TEXT -1 3 "and" }}{PARA 0 "" 0 "" {TEXT -1 29 "       f_odd(-p) = -f_odd(p)." }}{PARA 0 "" 0 "" {TEXT -1 77 "Since f = f_odd + f_even, we can use this to calculate f(p) for all p=p1..pN." }}{PARA 0 "" 0 "" {TEXT -1 40 "We gain almost a factor of 2 in speedup." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 398 "To gain more than a factor of 2 in speedup, we should be able to do the same trick again (and again and again...), we should be able to evaluate F0 and F1 very fast at the points p1^2 .. pn^2. In order to do this fast, we want that again n is a multiple of 2, say n=2*m, and that the second half of the points equals -1 times the first half of the points. This can be achieved in the following way:" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 134 "Let N be a power of 2. Let omega be a primitive N'th root of unity: N is the smallest positive integer such that omega^N=1. 
Then take:" }}{PARA 0 "" 0 "" {TEXT -1 39 "  Points := [seq( omega^i , i=0..N-1)];" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 125 "By doing the tricks mentioned above, a polynomial f of degree <N can be evaluated at all N points with only O(N*log(N)) work." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 510 "FFT:=proc(f, omega, N,x)\n # f must have degree <N, where N is a power of 2 and omega\n # is a primitive N'th root of unity: omega^N=1.\n # Output: the list of values of f at the points\n # [seq(omega^i, i=0..N-1)].\n local Even,Odd,i;\n if N=1 then RETURN([f]) fi;\n Even:=procname( add(coeff(f,x,2*i)*x^i,i=0..N/2), omega^2, N/2, x);\n Odd :=procname( add(coeff(f,x,2*i+1)*x^i,i=0..N/2), omega^2, N/2, x);\n Odd :=[seq(omega^(i-1)*Odd[i], i=1..N/2)];\n [seq(Even[i]+Odd[i],i=1..N/2), seq(Even[i]-Odd[i],i=1..N/2)]\nend;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 5 "N:=8;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 28 "omega:=evalf(exp(2*Pi*I/N));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 32 "f:=randpoly(x,degree=N-1,dense);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 17 "FFT(f,omega,N,x);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 81 "If all is well, then these are the values of f evaluated at the following points:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 32 "Points:=[seq(omega^i,i=0..N-1)];" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 17 "Let's check that:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 28 "[seq(subs(x=i,f),i=Points)];" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 150 "inverse_FFT:=proc(L, omega, N, x)\n local i,Coeffs;\n Coeffs:=FFT(add(L[i]*x^(i-1),i=1..N), 1/omega, N, x);\n 1/N * add(Coeffs[i]*x^(i-1),i=1..N)\nend;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 2 "f;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 17 "FFT(f,omega,N,x);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "inverse_FFT(%,omega,N,x);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 11 "evalf(%-f);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 47 "`The error is`=max(op(map(abs,\{coeffs(%,x)\})));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 311 "We see that computing an inverse FFT is the same as computing an FFT with 1/omega instead of omega, and you have to divide the result by N. The reason that this works is the following. Recall that FFT is evaluation at Points=[omega^0,omega^1,...], and that this corresponds to the linear map given by the matrix" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 57 "A=vandermonde(Points)=vandermonde([omega^0,omega^1,...])."
}}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 136 "Now inverse_FFT is interpolation and corresponds to A^(-1). It is not hard to show that if omega is a primitive N'th root of unity then:" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 129 "A^(-1) = 1/N * vandermonde([ (1/omega)^0, (1/omega)^1, ... ])   (for a proof see the book, but first try to prove this yourself!)" }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 94 "This formula for A^(-1) tells you precisely how to do the inverse FFT. Let's do some examples." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "f:=randpoly(x,degree=50,dense);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "g:=randpoly(x,degree=70,dense);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 7 "N:=128;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 40 "Digits:=15;  # set accuracy at 15 digits." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 30 "omega:=evalf( exp(2*Pi*I/N) );" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 29 "values_f := FFT(f,omega,N,x):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 29 "values_g := FFT(g,omega,N,x):" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 80 "These are the values of f and g at the Points [omega^0,omega^1,...,omega^(N-1)]." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 51 "values_pr := [seq(values_f[i]*values_g[i],i=1..N)]:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 37 "pr:=inverse_FFT(values_pr,omega,N,x):" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 103 "Now pr should be approximately equal to f*g. Let's compute the largest error in the coefficients of pr." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 43 "max(op(map(abs,[coeffs(expand(pr-f*g))])));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 191 "Note that we do not need such high accuracy, as long as we are certain that the errors are smaller than 0.5 then we can always obtain the product f*g from pr by rounding off the coefficients."
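For comparison outside Maple, the whole pipeline (recursive FFT, pointwise products, inverse FFT via 1/omega and division by N, then rounding) can be sketched in Python. All names here are invented; this mirrors the scheme above rather than any particular library routine.

```python
import cmath

def fft(coeffs, omega):
    """Evaluate the polynomial with the given coefficient list (length a
    power of 2, lowest degree first) at omega^0, omega^1, ..., omega^(N-1),
    by recursing on the even- and odd-degree halves."""
    N = len(coeffs)
    if N == 1:
        return coeffs[:]
    even = fft(coeffs[0::2], omega * omega)
    odd = fft(coeffs[1::2], omega * omega)
    out = [0] * N
    w = 1
    for i in range(N // 2):
        out[i] = even[i] + w * odd[i]            # value at omega^i
        out[i + N // 2] = even[i] - w * odd[i]   # value at -omega^i
        w *= omega
    return out

def multiply(f, g):
    """Multiply integer polynomials: evaluate both, multiply pointwise,
    interpolate with the inverse FFT (= FFT at 1/omega, divided by N),
    then round the coefficients back to integers."""
    N = 1
    while N < len(f) + len(g) - 1:
        N *= 2
    omega = cmath.exp(2j * cmath.pi / N)
    vf = fft(f + [0] * (N - len(f)), omega)
    vg = fft(g + [0] * (N - len(g)), omega)
    vprod = [a * b for a, b in zip(vf, vg)]
    prod = fft(vprod, 1 / omega)                 # inverse transform, up to 1/N
    return [round((c / N).real) for c in prod[:len(f) + len(g) - 1]]

# (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
print(multiply([1, 2, 3], [4, 5]))  # -> [4, 13, 22, 15]
```

The rounding step is safe exactly when the floating point errors stay below 0.5, which is the accuracy question the surrounding text discusses.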
}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 33 "Digits:=10;  # the default value." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 32 "f:=randpoly(x,degree=250,dense):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 32 "g:=randpoly(x,degree=250,dense):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 7 "N:=512;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 30 "omega:=evalf( exp(2*Pi*I/N) );" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 27 "values_f:=FFT(f,omega,N,x):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 27 "values_g:=FFT(g,omega,N,x):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 49 "values_pr:=[seq(values_f[i]*values_g[i],i=1..N)]:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 37 "pr:=inverse_FFT(values_pr,omega,N,x):" }{TEXT -1 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 187 "The following command applies the command round (=round to nearest integer) to all coefficients, and since our approximation of the product has errors < 1/2 we will get the exact product." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "PR:=collect(pr,x,round):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 26 "evalb( PR = expand(f*g) );" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 267 "Maple also has the commands FFT and iFFT (inverse FFT). You can combine these commands with the Maple command \"evalhf\" (EVALuate with Hardware Floating point computations). Then the computation will use machine floating points and the computation will be much faster." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 815 "We have seen that FFT can be used for multiplying polynomials in Z[x]. The complexity is better than the O(N^2) for the usual multiplication or the O(N^(log(3)/log(2))) for Karatsuba multiplication. So eventually, if the degree is high enough, the FFT multiplication method should be faster than Karatsuba. The difficulty in making FFT fast is in the number omega. The integers Z or rational numbers Q do not contain primitive N'th roots of unity for N>2. 
Therefore we need to compute not in Z or Q but in a field that does contain these N'th roots, such as the complex numbers C, approximated by floating point numbers. To get a complete implementation of multiplication in Z[x] based on FFT is not easy, one needs to know, for example, with how many Digits to compute in order for the final result to be reliable." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 110 "Now C is not the only field that contains N'th roots of unity where N is a large power of 2. Take for example:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 10 "p:=2^16+1;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 11 "isprime(p);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 29 "N:=2; prim_root[N]:=-1 mod p;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 44 "v:=( Factors(x^2-prim_root[N]) mod p )[2];  " }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 352 "while nops(v)=2 do\n  # Apparently prim_root[N] is a square in Fp, so \n  # we can compute a square root using the Factors mod p\n  # command.\n  N:=N*2;\n  # Pick the smallest root alpha (the other one is -alpha\n  # which is equivalent to p-alpha).\n  prim_root[N] := min( x-v[1][1],x-v[2][1]) mod p;\n  v:=( Factors(x^2-prim_root[N]) mod p )[2];\nod;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 2 "N;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "omega:=prim_root[N];" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 23 "Power(omega,N/2) mod p;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "Power(omega,N) mod p;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 395 "FFT_modp:=proc(f, omega, N,x,p)\n local Even,Odd,Powers,i;\n if N=1 then RETURN([f]) fi;\n Even:=procname( add(coeff(f,x,2*i )*x^i,i=0..N/2),omega^2 mod p,N/2,x,p);\n Odd :=procname( add(coeff(f,x,2*i+1)*x^i,i=0..N/2),omega^2 mod p,N/2,x,p);\n Powers := powers_omega(omega,N,p);\n Odd := [seq(Odd[i]*Powers[i],i=1..N/2)];\n [seq(Even[i]+Odd[i],i=1..N/2),seq(Even[i]-Odd[i],i=1..N/2)] mod p\nend;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 106 
"powers_omega:=proc(omega,N,p)\n local i;\n options remember;\n [seq(Power(omega,i) mod p,i=0..N/2-1)]\nend;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 176 "inverse_FFT_modp:=proc(L, omega, N, x,p)\n local i,Coeffs;\n Coeffs:=FFT_modp(add(L[i]*x^(i-1),i=1..N), 1/omega mod p, N, x,p);\n 1/N * add(Coeffs[i]*x^(i-1),i=1..N) mod p\nend;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 38 "f:=randpoly(x,degree=200,dense) mod p:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 38 "g:=randpoly(x,degree=300,dense) mod p:" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 122 "To compute the product pr = f*g we need to have N=some power of 2 and N>degree(pr,x). We can do that in the following way:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 59 "N:=2^ceil(simplify(log(degree(f,x)+degree(g,x)+1)/log(2)));" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 41 "values_f:=FFT_modp(f,prim_root[N],N,x,p):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 41 "values_g:=FFT_modp(g,prim_root[N],N,x,p):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 55 "values_pr:=[seq(values_f[i]*values_g[i],i=1..N)] mod p:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 51 "pr:=inverse_FFT_modp(values_pr,prim_root[N],N,x,p):" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 43 "Expand(f*g - pr) mod p;  # This should be 0." }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 345 "In principle the method sketched above is very fast. We still don't beat the speed of Expand mod p in this example though. We probably have quite a bit of overhead in handling our lists and polynomials. To make it faster one would have to implement this more carefully, avoiding the unnecessary construction of so many new polynomials and lists." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 841 "One other remark about the efficiency here is that all our integers are of the type longint, they're the type of integers that can get arbitrarily many digits. 
Integers that could be of arbitrary size are complicated datastructures, and therefore th ey are not as fast as the integers one has in a computer language such as C. In C and most other languages the integers have some a priori b ound (so they're not really integers, they're in fact integers modulo \+ some power of 2), making them much easier (read: faster) to handle. H owever, since we're computing modulo p we know a priori that the integ ers never get larger than p. So it would be better to use a faster dat atype for our integers in this code. Maple has a special datastructure for polynomials with coefficients modulo p, but the syntax is rather \+ awkward so I won't go into that." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }} {PARA 0 "" 0 "" {TEXT -1 134 "The conclusion is that with FFT we can i n principle multiply polynomials very fast, if you would implement it \+ in a language such as C." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 " " 0 "" {TEXT -1 172 "Note that multiplication in Z[x] and in Z are rel ated problems. I'll illustrate that with an example. Let's say we have some large number, and we write it out on base beta." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 11 "beta:=1000;" }}}{EXCHG {PARA 0 "> \+ " 0 "" {MPLTEXT 1 0 13 "A:=123456789;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "B:=987654321;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "FA:=123*x^2+456*x+789;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 22 "FB:=987*x^2+654*x+321;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 4 "A*B;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 14 "expand(FA*FB );" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 15 "subs(x=1000,%);" }}} {EXCHG {PARA 0 "" 0 "" {TEXT -1 93 "As you can see, the problem of mul tiplying A and B can be reduced to multiplying polynomials." }}{PARA 0 "" 0 "" {TEXT -1 0 "" }}{PARA 0 "" 0 "" {TEXT -1 90 "The converse is also true, you can reduce multiplying polynomials to multiplying inte gers." 
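The reduction just described (write each integer's base-beta digits as polynomial coefficients, multiply the polynomials, then substitute x=beta) can be sketched in Python. The helper names are invented, and a schoolbook product stands in where an FFT-based product would go.

```python
def digits(n, beta):
    # base-beta digits of n, least significant first: n = sum d_i * beta^i
    out = []
    while n:
        n, d = divmod(n, beta)
        out.append(d)
    return out

def poly_mul(f, g):
    # schoolbook O(N^2) product of coefficient lists (lowest degree first);
    # an asymptotically fast FFT product could be dropped in here instead
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def int_mul(A, B, beta=1000):
    """Multiply integers by multiplying the polynomials carrying their
    base-beta digits, then substituting x = beta (which handles carries)."""
    prod = poly_mul(digits(A, beta), digits(B, beta))
    return sum(c * beta ** i for i, c in enumerate(prod))

print(int_mul(123456789, 987654321) == 123456789 * 987654321)  # True
```

Substituting x = beta works because A and B are by construction the values of their digit polynomials at beta, so A*B is the value of the product polynomial at beta.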
}}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "f:=12*x^3 + 23*x^2 + 34*x + 45;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 31 "g:=54*x^3 \+ + 43*x^2 + 32*x + 21;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "If := 12002300340045;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 21 "Ig : = 54004300320021;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 6 "If*Ig; " }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 20 "sort(expand(f*g),x);" } }}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}{EXCHG {PARA 0 "" 0 " " {TEXT -1 339 "So if you have an asymptotically fast algorithm for mu ltiplication in Z[x] (i.e. an algorithm with a good complexity O(f(n )) where f(n) is a function that grows slowly) then you have an asympt otically fast algorithm for multiplication in Z, and vice versa. The F FT method for multiplying integers is called the Schonhage-Strassen me thod." }}}}{MARK "113" 0 }{VIEWOPTS 1 1 0 1 1 1803 }
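As a closing cross-check, the mod p FFT of the section above can be mirrored in Python. One ingredient not taken from the worksheet: it is a standard fact that 3 is a primitive root modulo 2^16+1, so omega = 3^((p-1)/N) mod p is a primitive N'th root of unity for any power of two N up to 2^16; the worksheet instead finds omega by repeated square roots. Function names are invented.

```python
P = 2 ** 16 + 1  # the Fermat prime used in the worksheet

def fft_modp(coeffs, omega, p=P):
    """Same recursion as over C, but with arithmetic modulo p; omega must
    be a primitive N'th root of unity mod p, N = len(coeffs) a power of 2."""
    N = len(coeffs)
    if N == 1:
        return coeffs[:]
    even = fft_modp(coeffs[0::2], omega * omega % p, p)
    odd = fft_modp(coeffs[1::2], omega * omega % p, p)
    out = [0] * N
    w = 1
    for i in range(N // 2):
        out[i] = (even[i] + w * odd[i]) % p
        out[i + N // 2] = (even[i] - w * odd[i]) % p
        w = w * omega % p
    return out

def inverse_fft_modp(vals, omega, p=P):
    # inverse transform: FFT at omega^(-1), then multiply by N^(-1) mod p
    N = len(vals)
    coeffs = fft_modp(vals, pow(omega, -1, p), p)
    ninv = pow(N, -1, p)
    return [c * ninv % p for c in coeffs]

N = 8
omega = pow(3, (P - 1) // N, P)  # primitive N'th root of unity mod P
f = [5, 4, 3, 2, 1, 0, 0, 0]     # coefficient list, lowest degree first
print(inverse_fft_modp(fft_modp(f, omega), omega) == f)  # True
```

Because all arithmetic is exact modulo p, no rounding or Digits analysis is needed, which is the main attraction of this variant over the floating point one.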