
Applied Mathematics and Computational Intelligence Volume 6, 2017 [83-98]

Solving Large System of Linear Equation Using Successive Over-Relaxation, Conjugate Gradient and Preconditioned Conjugate Gradient

Nurhanani Abu Bakar 1,a and Mohd Rivaie 1,b

1 Department of Computer Science and Mathematics, MARA University of Technology (UiTM) Terengganu, Campus Kuala Terengganu, Malaysia

ABSTRACT

System of linear equations is widely used, especially in industries. Industrial mathematical applications usually involve large problems. A system of linear equations can be solved using direct and indirect methods. The direct method requires the use of an inverse matrix to solve the problem. However, for a large system of linear equations, finding an inverse could be difficult and time consuming. Therefore, indirect methods in the form of numerical calculation, such as Successive Over-Relaxation, Conjugate Gradient and Preconditioned Conjugate Gradient, are used. The aim of this research is to compare the performance of these three methods in solving a variety of systems of linear equations, from small scale to large scale, in terms of number of iterations and CPU time. Numerical results show that the Conjugate Gradient method is the best method for solving systems of linear equations in terms of both number of iterations and CPU time. Above all, all three methods can be used to solve systems of linear equations.

Keywords: System of linear equation, Iterative method, Successive Over-Relaxation, Conjugate Gradient, Preconditioned Conjugate Gradient

1. INTRODUCTION

The matrix form of type Ax = b is one of the topics that are commonly applied in many areas, such as structural design, principal component analysis, exploration and remote sensing, biology, electricity, solid mechanics, molecular spectroscopy, structural dynamics, automatic control theory, vibration theory, and others (Konghua et al., 2007). Linear systems arise in the real world in various applications. A system of linear equations is a collection of two or more linear equations involving the same set of unknowns (Gharib et al., 2015). The word "system" indicates that the equations are to be considered collectively rather than individually. A linear equation in n unknowns is an equation of the form:

a_1 x_1 + a_2 x_2 + ... + a_n x_n = b

where a_1, a_2, ..., a_n and b are real numbers and x_1, x_2, ..., x_n are variables (Leon, 2010). A linear system of m equations in n unknowns is then a system of the form:

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
  ...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

    a [email protected], b [email protected]


where the a_ij's and the b_i's are all real numbers. Commonly, a system of linear equations is converted to the matrix form Ax = b (Shastri et al., 2015). Consider a system of linear equations in three variables:

a_11 x_1 + a_12 x_2 + a_13 x_3 = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 = b_2
a_31 x_1 + a_32 x_2 + a_33 x_3 = b_3

The matrix form Ax = b will then be:

[ a_11  a_12  a_13 ] [ x_1 ]   [ b_1 ]
[ a_21  a_22  a_23 ] [ x_2 ] = [ b_2 ]
[ a_31  a_32  a_33 ] [ x_3 ]   [ b_3 ]

In this matrix form, the left-hand side matrix, or coefficient matrix, is the matrix A, whereas the right-hand side vector is b. The values of x_1, x_2 and x_3 then represent the solution of the linear system.
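As a small illustration of this matrix form (not taken from the paper; the coefficients below are arbitrary example values), a three-variable system can be assembled and solved directly with NumPy:

```python
import numpy as np

# Illustrative 3-variable system (made-up coefficients):
#   4x1 +  x2 +  x3 = 6
#    x1 + 5x2 + 2x3 = 8
#    x1 + 2x2 + 6x3 = 9
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])

# Direct solution (the "exact value" against which iterative results are later compared)
x = np.linalg.solve(A, b)
print(x)                      # solution vector (x1, x2, x3)
print(np.allclose(A @ x, b))  # True: the residual is numerically zero
```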

In fact, many industries face problems involving high-dimensional systems. Such problems can be solved using iterative methods, which may take less time and cost less than a direct solver. Systems of linear equations are widely used, especially in industries, and can be solved using direct and indirect methods. Researchers such as Kacamarga et al. (2014), Jamil (2012) and Shastri et al. (2015) studied the comparison between direct and indirect methods and noted that the indirect methods become more efficient than the direct method. In this research, the Successive Over-Relaxation (SOR), Conjugate Gradient (CG) and Preconditioned Conjugate Gradient (PCG) methods are chosen to solve systems of linear equations.

2. DIRECT METHODS

The direct methods for solving linear equations are known to have difficulties in both manual and computer calculation. For instance, Gauss elimination is sensitive to rounding error (Kalambi, 2008). Generally, the solution is obtained after a single application of Gaussian elimination. This can be a problem because, once a solution is obtained, no improvement can be made to it. Besides, solving a large system of equations is a burden to the computer in terms of computational time due to the high memory storage. This makes a direct method too difficult to use, as it requires many steps to be carried out. More importantly, the operation cost of Gaussian elimination is too high for most large systems due to the high computational memory requirement.

3. LITERATURE REVIEW

Systems of linear equations are collections of linear equations that consist of many linear equations with the same variables (Jamil, 2012). The system may consist of many unknown variables which must be solved simultaneously in the form Ax = b. Many methods have been proposed by other researchers who studied this system. Rodríguez-Alvarez et al. (2010) applied this system in image reconstruction and proposed the QR decomposition to solve the linear system.


According to Liu (2011), direct methods such as Gauss-Jordan and LU decomposition are slower than Jacobi, Richardson and Gauss-Seidel, which are known as iterative methods. On the other hand, the Homotopy Perturbation method showed the best results in terms of errors and computational times.

According to Atasoy et al. (2012), systems of linear equations can be applied to many scientific and engineering problems. The solution of a Linear Circuit Equation System (LCES) is obtained using the Gauss-Jordan elimination method and examined in terms of CPU time. Besides, a study of classical iterative refinement (IR) and k-fold iterative refinement (RIR) for solving linear equations of the form Ax = b has shown that RIR is very stable and robust compared with IR, which is effective but not stable enough (Smoktunowicz & Smoktunowicz, 2013). A linear system Ax = b in which A is a circulant matrix is normally used to find numerical solutions of elliptic and parabolic partial differential equations. Fuyong (2014) stated that circulant matrices are commonly used to solve problems in many areas, for example physics, electromagnetics, signal processing, statistics and applied mathematics.

In contrast, Dimov et al. (2015) proposed a new Walk on Equations (WE) Monte Carlo algorithm to solve systems of linear algebraic (LA) equations. The comparison of WE Monte Carlo and the Preconditioned Conjugate Gradient (PCG) method showed that convergence for WE Monte Carlo is better. Furthermore, research on ill-conditioned linear systems was presented by Douglas et al. (2016), who used a Krylov subspace method to solve sparse matrices with a large condition number. Afterwards, Yeung et al. (2017) presented a preconditioner to improve the convergence of the solution when computing ill-conditioned systems. These problems are solved using the Generalized Minimal Residual (GMRES) method and the modified Biconjugate Gradient Stabilized (MBiCG) method.

Based on this literature review, it can be seen that most researchers are interested in solving systems of linear equations using direct and indirect methods. Therefore, this research focuses on solving systems of linear equations with various types of matrices using three different iterative methods.

4. FUNDAMENTAL OF ITERATIVE METHODS

Iterative methods can be used to approximate solutions that converge rapidly or slowly. Iterative methods have become popular in the scientific computing field for computing large numbers of unknowns (Sickel et al., 2005). An iterative method is a mathematical solver that improves approximate results for some problems. According to Collignon (2011), there are many types of iterative methods. The Jacobi, Gauss-Seidel and Successive Over-Relaxation (SOR) methods are the basic iterative techniques for solving linear systems (Saad, 2003). Some of the iterative methods, such as Jacobi and Gauss-Seidel, could be used with some success, mainly because of the limited amount of memory that is required (Saad & Van Der Vorst, 2000). In practical computations, Gauss-Seidel iteration is shown to converge slowly. Consequently, SOR is used to accelerate convergence by taking a bigger step at each iteration. Theoretically, the Jacobi and Gauss-Seidel methods are slower than SOR for solving large linear systems with the best chosen value of the extrapolation factor (Kalambi, 2008). The SOR method is devised by applying an extrapolation value to the Gauss-Seidel method. In order to accelerate the rate of convergence, a suitable value of the extrapolation factor ω is selected. However, the SOR method still has disadvantages, since the convergence of the solution depends on the value of the extrapolation factor denoted as ω.

The Conjugate Gradient (CG) method is efficient for solving systems of linear equations (Li et al., 2008). The CG method can also be used for minimization and is related to a minimization method known as steepest descent.


For problems of large dimension where the matrix is "ill-conditioned", it is often possible to improve the performance of the conjugate gradient method by using a technique known as preconditioning. A matrix is ill-conditioned if its determinant approaches zero, and it may suffer from numerical errors. In some cases, where the eigenvalues are close to the origin, iterative methods tend to take many iterations before the stopping criterion is met. This may cause an unstable system that does not converge to the solution point (Douglas et al., 2016).

Besides, an ill-conditioned system can be determined based on its condition number. The condition number for a system can be expressed as κ(A) = λ_max / λ_min, where λ_max and λ_min are the largest and smallest eigenvalues of A. A high value of the condition number is considered ill-conditioned (Burden & Faires, 2011). The Preconditioned Conjugate Gradient (PCG) method is one of the most efficient methods for solving systems with symmetric positive definite coefficient matrices (Arany, 1999).

5. SUCCESSIVE OVER-RELAXATION (SOR) METHOD

The SOR method is a generalization of, and improvement on, the Gauss-Seidel (GS) method. It is an extrapolation of the GS method which uses the GS iterates to accelerate convergence (Mittal, 2014). The source of the SOR method is the observation that GS converges monotonically: the approximations x^(k) stay on the same side of the solution x as the iteration number increases. The SOR method introduces the acceleration parameter, or extrapolation factor, denoted as ω. Listed below is the formula for the SOR method:

x_i^(k) = (1 - ω) x_i^(k-1) + (ω / a_ii) [ b_i - Σ_{j<i} a_ij x_j^(k) - Σ_{j>i} a_ij x_j^(k-1) ]

where
x_i^(k) : the k-th iterate of x_i
ω : extrapolation factor
a_ij : element of the matrix A in row i and column j
b_i : element of the vector b in row i

The algorithm below shows the calculation steps in the SOR method.

Input: number of unknowns and equations N; entries a_ij of A, i, j = 1, ..., N; entries b_i of b, i = 1, ..., N; error tolerance ε
Output: entries x_i of the solution x

Algorithm 1. Steps in the SOR method

Step 1: Choose an initial guess x^(0) to the solution x. For k = 1, 2, 3, ... and for i = 1, 2, ..., N:
Step 2: x_1^(k) = (1 - ω) x_1^(k-1) + (ω / a_11) [ b_1 - Σ_{j=2}^{N} a_1j x_j^(k-1) ]
Step 3: x_i^(k) = (1 - ω) x_i^(k-1) + (ω / a_ii) [ b_i - Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i+1}^{N} a_ij x_j^(k-1) ]


Step 4: x_N^(k) = (1 - ω) x_N^(k-1) + (ω / a_NN) [ b_N - Σ_{j=1}^{N-1} a_Nj x_j^(k) ]
Step 5: Repeat for k, the iteration count of x, and i = 1, ..., N, the number of unknowns.
Step 6: Repeat the calculations until the tolerance factor is achieved.
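The following Python/NumPy sketch is one possible implementation of Algorithm 1. The test matrix, right-hand side, relaxation factor and tolerance are illustrative choices rather than values prescribed by the paper, and the stopping test assumes a criterion on the change between successive iterates:

```python
import numpy as np

def sor(A, b, omega, x0=None, tol=1e-10, max_iter=10_000):
    """Successive Over-Relaxation sweep (sketch of Algorithm 1).

    A     : (n, n) coefficient matrix with non-zero diagonal
    b     : (n,)   right-hand side
    omega : extrapolation (relaxation) factor, 0 < omega < 2
    Stops when the change between successive iterates is below tol.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel style sweep: already-updated components x[:i], old ones x_old[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:   # stopping criterion on successive iterates
            return x, k + 1
    return x, max_iter

# Example with an illustrative SPD matrix and omega = 1.25
A = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
x, iters = sor(A, b, omega=1.25)
```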

Extrapolation factor

Extrapolation is the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable. The SOR method is devised by applying extrapolation to GS. In this extrapolation, a weighted average is taken between the previous iterate and the current GS iterate, successively for each component (Chen, 2014):

x_i^(k) = ω x̄_i^(k) + (1 - ω) x_i^(k-1)

where x̄_i^(k) denotes a GS iterate for the i-th component of x̄^(k) = (x̄_1^(k), x̄_2^(k), ..., x̄_N^(k)) and ω is the extrapolation factor. To accelerate the rate of convergence, a suitable value of ω is selected. There are five cases of the extrapolation factor (Barrett et al., 1994). Table 1 shows the condition of ω for each case.

Table 1. Conditions of extrapolation factor

Case  Extrapolation factor  Condition
I     ω = 1                 Simplifies to the Gauss-Seidel method
II    ω = 0                 No iteration
III   0 < ω < 1             Under-relaxation
IV    1 < ω < 2             Over-relaxation
V     ω ≥ 2                 Divergent

Regrettably, there is no general rule for selecting the optimal value of the relaxation factor ω. Some researchers, such as Gismalla (2014), use ω = 1.25 for the over-relaxation parameter, the same value as in Burden & Faires (2011). In contrast, according to Eiermann & Varga (1993), ω = 1.88401 was used as the extrapolation factor in order to get better results. However, authors like Yang et al. (2005) and Barrett et al. (1994) showed that ω = 1.20 is the best extrapolation parameter. Therefore, in this project, the values of the extrapolation factor are varied and set as 0.50, 1.25 and 1.90.

6. CONJUGATE GRADIENT METHOD

The Conjugate Gradient (CG) method is designed for solving symmetric positive definite (SPD) linear systems (Modersitzki, 1994). The conjugate gradient method can be viewed theoretically as a direct method, as it produces the exact solution after a finite number of iterations under certain conditions (Ryaben'kii & Tsynkov, 2006). Remarkably, the CG method can also be used as an iterative method, as it provides monotonically improving approximations to the exact solution (O'Leary, 1979). Basically, monotone behaviour can be divided into two types: increasing and decreasing. Monotone increasing, also known as strictly increasing, means always increasing and never remaining constant, while monotone decreasing, also known as strictly decreasing, means always decreasing and never remaining constant (Weisstein, 2002). Monotonic convergence can be related to performance that improves with each iteration. Also, the rate of convergence shows how effectively the solution point is approached (O'Leary, 1979).


The CG method is known to overcome this problem without needing any parameter. Research has shown that the CG method is more effective, with CPU times faster than Jacobi (Kacamarga et al., 2014). In addition, Hestenes and Stiefel (1952) stated that the CG method has the advantage that an improvement can be made by taking one additional step. This suggests that the CG method is better than the Jacobi, Gauss-Seidel and SOR methods. The algorithm below shows the calculation steps in the CG method.

Algorithm 2. Steps in the CG method

Step 1: Let r_0 = b - Ax_0 and p_0 = r_0. For k = 1, 2, 3, ...
Step 2: β_{k-1} = (r_{k-1}^T r_{k-1}) / (r_{k-2}^T r_{k-2}) and p_{k-1} = r_{k-1} + β_{k-1} p_{k-2}, when k > 1
Step 3: α_{k-1} = (r_{k-1}^T r_{k-1}) / (p_{k-1}^T A p_{k-1})
Step 4: x_k = x_{k-1} + α_{k-1} p_{k-1}
Step 5: r_k = r_{k-1} - α_{k-1} A p_{k-1}
Step 6: k = k + 1
Step 7: Repeat Step 2 to Step 6 for k iterations of x.
Step 8: Repeat the calculations until the tolerance factor is achieved.
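A compact Python/NumPy sketch of the CG iteration described in Algorithm 2 is given below. It assumes a residual-norm stopping test and an illustrative tolerance, which may differ from the exact stopping rule used in the paper:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Conjugate Gradient for a symmetric positive definite A (sketch of Algorithm 2)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    r = b - A @ x            # initial residual r_0 = b - A x_0
    p = r.copy()             # initial search direction p_0 = r_0
    rs_old = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)     # step length alpha
        x = x + alpha * p             # update the approximate solution
        r = r - alpha * Ap            # update the residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:     # stop when the residual norm is small (assumed test)
            return x, k + 1
        beta = rs_new / rs_old        # Fletcher-Reeves style ratio of residual norms
        p = r + beta * p              # new conjugate search direction
        rs_old = rs_new
    return x, max_iter
```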

7. PRECONDITIONED CONJUGATE GRADIENT METHOD

The CG method is an excellent algorithm, being both extremely fast and very easy to implement. However, numerical experiments show that the CG method often does not converge as fast as it should (Bai & Wang, 2006). If the matrix A is ill-conditioned, the CG method may suffer from numerical errors (Benhamadou, 2007). Consequently, the Preconditioned Conjugate Gradient method is used to overcome this sensitivity to rounding off.

Additionally, slow convergence arises because the system is ill-conditioned. A linear system with a small condition number is called well-conditioned, whereas a high condition number indicates an ill-conditioned system (Kreyszig, 2010). An ill-conditioned system can therefore be identified by a high condition number. The condition number for a system can be expressed as κ(A) = λ_max / λ_min. It can be shown that, for the conjugate gradient method, the number of iterations required to reach convergence is proportional to the condition number. Preconditioning is a technique for reducing the condition number in order to improve the rate of convergence of an iterative process (Benzi, 2002).
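As a hedged illustration of these statements, the condition number κ(A) = λ_max / λ_min and the effect of preconditioning can be checked numerically. The matrix values and the choice of a Jacobi (diagonal) preconditioner are assumptions for this example only; the paper does not state which preconditioner it uses:

```python
import numpy as np

# Illustrative ill-conditioned SPD matrix (values made up for this example)
A = np.array([[100.0, 1.0],
              [1.0, 0.02]])

eigvals = np.linalg.eigvalsh(A)              # eigenvalues of a symmetric matrix
print(eigvals.max() / eigvals.min())         # kappa(A) = lambda_max / lambda_min (~1e4)

# Jacobi (diagonal) preconditioning, one common choice of M (assumed here):
M_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(A)))
A_prec = M_inv_sqrt @ A @ M_inv_sqrt         # symmetrically preconditioned matrix
eigvals_p = np.linalg.eigvalsh(A_prec)
print(eigvals_p.max() / eigvals_p.min())     # much smaller condition number (~5.8)
```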

The algorithm below shows the calculation steps in the PCG method, where M denotes the preconditioning matrix.

Algorithm 3. Steps in the PCG method

Step 1: Let r_0 = b - Ax_0, z_0 = M^{-1} r_0 and p_0 = z_0. For k = 0, 1, 2, 3, ...
Step 2: α_k = (r_k^T z_k) / (p_k^T A p_k)


Step 3: x_{k+1} = x_k + α_k p_k
Step 4: r_{k+1} = r_k - α_k A p_k
Step 5: z_{k+1} = M^{-1} r_{k+1}
Step 6: β_k = (r_{k+1}^T z_{k+1}) / (r_k^T z_k)
Step 7: p_{k+1} = z_{k+1} + β_k p_k
Step 8: k = k + 1
Step 9: Repeat Step 2 to Step 8 for k iterations of x.
Step 10: Repeat the calculations until the tolerance factor is achieved.
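A Python/NumPy sketch of Algorithm 3 follows. Here M_inv is any routine that applies M^{-1}, and the Jacobi (diagonal) preconditioner shown in the usage comment is an assumed choice for illustration, not necessarily the preconditioner used by the authors:

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, x0=None, tol=1e-10, max_iter=1000):
    """Preconditioned CG (sketch of Algorithm 3); M_inv(v) applies the inverse preconditioner."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    r = b - A @ x              # r_0 = b - A x_0
    z = M_inv(r)               # z_0 = M^{-1} r_0
    p = z.copy()               # p_0 = z_0
    rz_old = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)      # alpha_k = (r_k, z_k) / (p_k, A p_k)
        x = x + alpha * p              # x_{k+1}
        r = r - alpha * Ap             # r_{k+1}
        if np.linalg.norm(r) < tol:    # stopping test on the residual norm (assumed)
            return x, k + 1
        z = M_inv(r)                   # z_{k+1} = M^{-1} r_{k+1}
        rz_new = r @ z
        beta = rz_new / rz_old         # beta_k
        p = z + beta * p               # p_{k+1} = z_{k+1} + beta_k p_k
        rz_old = rz_new
    return x, max_iter

# Example usage with a Jacobi (diagonal) preconditioner -- an assumed choice:
# x, iters = preconditioned_cg(A, b, M_inv=lambda v: v / np.diag(A))
```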

8. STOPPING CRITERIA

The stopping criteria for each method are the same. The stopping criterion of an iterative method is to iterate until

‖x - x^(k)‖ / ‖x‖ < ε,  x ≠ 0.

This means that, if x^(k) is an approximation to the value of x, then the absolute error is ‖x - x^(k)‖ and the relative error is ‖x - x^(k)‖ / ‖x‖ < ε, provided that x ≠ 0. Different researchers use different values of the tolerance factor in their calculations, for example Gismalla (2014) and Burden & Faires (2011); Aparecido et al. (2014) use a tighter tolerance factor in order to get better results, and authors like Yang et al. (2005) and Barrett et al. (1994) indicate which tolerance gives the best stopping criterion. Therefore, in this research, the value of the tolerance factor ε is varied over three values, from the loosest to the tightest.

The following stopping criterion is used:

‖x^(k) - x^(k-1)‖ < ε

9. DATA AND INITIAL VALUES

The matrix A is created in five different dimension sizes: 2×2, 3×3, 5×5, 10×10 and 20×20. Similarly, the matrix b is generated in five different dimension sizes: 2×1, 3×1, 5×1, 10×1 and 20×1. For the non-symmetric case of matrix A, only three different dimension sizes are involved, namely 2×2, 3×3 and 5×5; likewise, matrix b takes the three sizes 2×1, 3×1 and 5×1. The matrix A is generated in both symmetric and non-symmetric form. Both positive definite and negative definite matrices are used in order to test the ability of the methods to find the approximate solution. These two types of matrix are chosen because their eigenvalues are real.
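The paper does not describe how its test matrices were generated, so the sketch below shows only one common way to build a symmetric positive definite test matrix of a given size (and its negative definite counterpart) for experiments of this kind; the construction is an assumption for illustration:

```python
import numpy as np

def make_spd(n, seed=0):
    """One possible way to generate an n x n symmetric positive definite test matrix.

    This construction is an assumption for illustration; the paper does not
    describe how its test matrices were generated.
    """
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)   # B B^T is symmetric PSD; adding n*I makes it PD

A = make_spd(20)        # 20 x 20 symmetric positive definite test matrix
b = np.ones(20)         # a 20 x 1 right-hand side
A_neg = -A              # symmetric negative definite counterpart
```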


The initial point, also known as the starting point, indicates where the calculation starts. Normally, the initial point is zero. However, in this research, the initial point is set at five different points, as stated in the tables of numerical results. The initial points are selected from a point close to the solution point to the one furthest away.

10. RESULT ANALYSIS

In this research, the results are computed using the normwise absolute error (Ipsen, 2009). The error is defined by the following formula:

Absolute error = ‖x - x^(k)‖

where
x : exact value
x^(k) : approximate value
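In code, this error measure amounts to a single norm evaluation. The snippet below reuses the matrix A, the vector b and the sor function from the earlier sketches and is illustrative only:

```python
import numpy as np

# x_exact  : solution from the direct method (np.linalg.solve in this sketch)
# x_approx : solution returned by SOR, CG or PCG
x_exact = np.linalg.solve(A, b)
x_approx, _ = sor(A, b, omega=1.25)               # any of the iterative solvers above
abs_error = np.linalg.norm(x_exact - x_approx)    # normwise absolute error
```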

The solution from the direct method represents the exact value, while the results obtained from the iterative methods, namely the SOR, CG and PCG methods, are the approximate values. The relative errors for each method are then computed to determine which method has the smallest error. The number of repeated steps in each method is counted as the number of iterations, which is counted until the tolerance factor is achieved. The most efficient method, the one with the fewest iterations under the same stopping criterion, is identified; conversely, the method with the highest number of iterations is concluded to be the least efficient. The methods are tested on the same computer with the same processor, an Intel® Core™ i5-7200U CPU @ 2.50GHz 2.71GHz with 4.00 GB RAM, because a different processor would change the CPU times.

The tables in the following section show the results for symmetric matrices in two aspects: number of iterations and CPU times. Each aspect consists of two types of matrix, positive definite and negative definite, and each involves three different values of the tolerance factor. However, the tables only show the numerical results for the tightest tolerance factor, because the tightest tolerance is the best stopping criterion and gives the smallest errors. For detailed results, the thesis of Abu Bakar (2017) can be referred to. Thus, the best types of matrix can be determined with the help of performance profiles, except for the results based on errors. SOR1, SOR2 and SOR3 denote SOR with ω = 0.5, ω = 1.25 and ω = 1.9 respectively.

10.1 Numerical Results Based on Errors

The numerical results of error for the symmetric positive definite (SPD) matrix and the symmetric negative definite (SND) matrix are shown in Table 2 and Table 3. The entry 'F' refers to Fail, indicating that the method is unable to converge to the solution point.

Table 2. Numerical results of error for SPD

MATRIX  INITIAL POINT      SOR1        SOR2        SOR3        CG          PCG
2x2     (0,0)              2.2858E-09  9.0463E-11  7.6583E-10  0.0000E+00  0.0000E+00
2x2     (10,10)            2.8930E-09  1.0232E-10  5.6666E-10  9.9920E-16  F
2x2     (100,100)          2.3170E-09  3.0499E-10  5.9934E-10  5.1167E-14  F
2x2     (-10,-10)          2.4081E-09  9.4273E-11  7.4621E-10  1.0968E-14  F
2x2     (-100,-100)        2.6317E-09  3.0868E-10  6.1541E-10  2.4394E-14  F


3x3     (0,0,0)            6.1717E-09  1.5961E-10  1.2690E-09  0.0000E+00  0.0000E+00
3x3     (10,10,10)         6.3259E-09  3.5915E-11  9.1498E-10  1.0408E-14  F
3x3     (100,100,100)      5.9009E-09  8.0817E-11  1.5100E-09  3.0602E-14  F
3x3     (-10,-10,-10)      6.1662E-09  4.1962E-11  1.0559E-09  1.4753E-14  F
3x3     (-100,-100,-100)   6.0622E-09  7.8599E-11  1.5371E-09  5.7001E-14  F
5x5     (0,..,0)           1.9699E-09  5.5282E-10  1.0266E-09  8.5657E-15  8.5657E-15
5x5     (10,..,10)         2.4291E-09  8.2008E-10  9.9461E-10  9.4934E-15  F
5x5     (100,..,100)       2.4488E-09  5.0525E-10  1.0441E-09  2.2374E-14  F
5x5     (-10,..,-10)       2.1285E-09  7.2564E-10  9.7795E-10  1.1782E-14  F
5x5     (-100,..,-100)     2.2598E-09  4.9736E-10  1.0411E-09  1.6325E-13  F
10x10   (0,..,0)           2.9464E-09  2.3906E-10  4.7908E-10  8.8998E-15  8.8998E-15
10x10   (10,..,10)         2.8818E-09  1.5589E-10  4.3769E-10  8.9928E-15  F
10x10   (100,..,100)       2.6968E-09  2.1506E-10  4.7197E-10  1.3591E-14  F
10x10   (-10,..,-10)       3.1894E-09  1.7464E-10  4.7918E-10  1.5973E-15  F
10x10   (-100,..,-100)     2.7243E-09  2.1669E-10  4.8020E-10  1.4233E-14  F
20x20   (0,..,0)           3.4008E-09  1.6830E-10  4.2795E-10  7.2643E-16  7.2643E-16
20x20   (10,..,10)         3.2825E-09  2.2698E-10  4.1166E-10  1.0404E-15  F
20x20   (100,..,100)       3.4196E-09  1.4390E-10  4.3698E-10  3.6069E-14  F
20x20   (-10,..,-10)       3.4341E-09  9.3534E-11  4.7395E-10  3.6066E-15  F
20x20   (-100,..,-100)     3.4351E-09  1.4574E-10  4.4668E-10  1.1124E-14  F
TOTAL                      1.6972E-08  7.7846E-10  2.1972E-09  5.2566E-14  7.2643E-16

Table 3. Numerical results of error for SND

MATRIX  INITIAL POINT      SOR1        SOR2        SOR3        CG          PCG
2x2     (0,0)              2.5716E-09  3.2702E-10  7.7981E-10  0.0000E+00  0.0000E+00
2x2     (10,10)            2.5173E-09  9.1126E-11  5.3844E-10  1.4445E-14  F
2x2     (100,100)          2.6270E-09  6.1599E-11  7.3848E-10  1.4445E-14  F
2x2     (-10,-10)          2.3054E-09  6.7224E-11  5.3902E-10  0.0000E+00  F
2x2     (-100,-100)        2.4641E-09  5.8372E-11  6.9563E-10  1.0214E-14  F
3x3     (0,0,0)            2.4190E-09  8.7807E-11  6.4794E-10  5.4097E-15  5.4097E-15
3x3     (10,10,10)         2.5760E-09  1.4570E-10  5.0380E-10  5.4097E-15  F
3x3     (100,100,100)      2.3463E-09  4.0106E-11  7.0243E-10  1.5568E-14  F
3x3     (-10,-10,-10)      2.7608E-09  1.4565E-10  4.0919E-10  5.4506E-15  F
3x3     (-100,-100,-100)   2.1687E-09  3.9620E-11  4.7693E-10  1.3879E-14  F
5x5     (0,..,0)           1.9933E-08  2.9626E-09  4.5414E-10  1.5384E-14  1.5384E-14
5x5     (10,..,10)         2.0167E-08  2.9540E-09  4.3824E-10  1.0228E-13  F
5x5     (100,..,100)       2.0440E-08  2.8419E-09  4.5012E-10  1.1855E-13  F
5x5     (-10,..,-10)       2.0444E-08  2.8361E-09  4.5672E-10  3.0916E-09  F
5x5     (-100,..,-100)     1.9800E-08  3.0916E-09  4.5989E-10  4.5358E-14  F


10x10   (0,..,0)           2.9464E-09  2.3906E-10  4.7908E-10  8.8998E-15  8.8998E-15
10x10   (10,..,10)         3.1894E-09  1.7464E-10  4.7918E-10  1.5973E-15  F
10x10   (100,..,100)       2.7243E-09  2.1669E-10  4.8020E-10  1.4233E-14  F
10x10   (-10,..,-10)       2.8818E-09  1.5589E-10  4.3769E-10  8.9928E-15  F
10x10   (-100,..,-100)     2.6968E-09  2.1506E-10  4.7197E-10  1.3591E-14  F
20x20   (0,..,0)           2.9707E-09  2.2483E-10  4.6021E-10  6.9335E-16  6.9335E-16
20x20   (10,..,10)         2.7086E-09  2.0588E-10  4.3893E-10  4.5602E-15  F
20x20   (100,..,100)       3.2268E-09  8.2483E-11  4.0955E-10  8.0627E-14  F
20x20   (-10,..,-10)       2.8833E-09  2.0462E-10  4.3618E-10  3.8011E-15  F
20x20   (-100,..,-100)     3.3295E-09  8.2440E-11  4.0931E-10  6.1506E-14  F
TOTAL                      1.5119E-08  8.0025E-10  2.1542E-09  1.5119E-13  6.9335E-16

Based on Table 2 and Table 3, CG FR has the smallest errors compared to the other methods. Additionally, the PCG method has error values similar to CG FR for the zero initial point. For the SOR method, ω = 1.25 gives the smallest errors compared to the other values of the extrapolation factor.

The numerical results of error for the non-symmetric positive definite (NSPD) matrix and the non-symmetric negative definite (NSND) matrix are shown in Table 4 and Table 5.

Table 4. Numerical results of error for NSPD

MATRIX  INITIAL POINT      SOR1        SOR2        SOR3        CG        PCG
2x2     (0,0)              1.4936E-09  9.8669E-11  3.9007E-10  F         F
2x2     (10,10)            1.4936E-09  8.8377E-11  4.3939E-10  F         F
2x2     (100,100)          1.4936E-09  4.1678E-11  6.0994E-10  F         F
2x2     (-10,-10)          1.4936E-09  9.7252E-11  4.4738E-10  F         F
2x2     (-100,-100)        1.4936E-09  4.2548E-11  6.3104E-10  F         F
3x3     (0,0,0)            4.0746E-09  5.1627E-10  8.8059E-10  F         F
3x3     (10,10,10)         3.8163E-09  1.8833E-10  4.6485E-10  F         F
3x3     (100,100,100)      4.2922E-09  6.0455E-11  3.8182E-10  F         F
3x3     (-10,-10,-10)      3.7473E-09  1.8494E-10  5.0155E-10  F         F
3x3     (-100,-100,-100)   4.2758E-09  5.9983E-11  3.8450E-10  F         F
5x5     (0,..,0)           1.8234E-09  3.6766E-10  7.0221E-10  F         F
5x5     (10,..,10)         2.0800E-09  4.0141E-10  8.4437E-10  F         F
5x5     (100,..,100)       1.9109E-09  5.0083E-10  7.3925E-10  F         F
5x5     (-10,..,-10)       2.0602E-09  4.1224E-10  8.7374E-10  F         F
5x5     (-100,..,-100)     1.9087E-09  5.0216E-10  7.4184E-10  F         F
TOTAL                      9.7831E-09  2.1843E-09  3.9014E-09  0.00E+00  0.00E+00


Table 5. Numerical results of error for NSND

MATRIX  INITIAL POINT      SOR1        SOR2        SOR3        CG          PCG
2x2     (0,0)              1.6596E-09  2.0130E-10  5.1517E-10  F           F
2x2     (10,10)            1.6391E-09  3.9167E-11  5.5630E-10  F           F
2x2     (100,100)          1.7700E-09  1.3459E-10  3.7940E-10  F           F
2x2     (-10,-10)          1.4752E-09  1.2988E-10  4.4512E-10  F           F
2x2     (-100,-100)        1.6836E-09  1.2821E-10  3.6070E-10  F           F
3x3     (0,0,0)            4.4369E-09  5.7482E-10  5.0125E-10  F           F
3x3     (10,10,10)         3.7423E-09  1.3617E-10  7.9696E-10  F           F
3x3     (100,100,100)      4.0291E-09  8.8626E-11  4.3437E-10  F           F
3x3     (-10,-10,-10)      4.2534E-09  1.3327E-10  4.2671E-10  F           F
3x3     (-100,-100,-100)   3.7787E-09  8.5723E-11  4.1570E-10  F           F
5x5     (0,..,0)           2.3168E-08  4.4296E-09  F           F           F
5x5     (10,..,10)         2.3574E-08  4.8688E-09  F           F           F
5x5     (100,..,100)       2.3723E-08  4.4886E-09  F           F           F
5x5     (-10,..,-10)       2.3427E-08  4.4553E-09  F           F           F
5x5     (-100,..,-100)     2.3015E-08  4.7767E-09  F           F           F
TOTAL                      1.1691E-07  2.3019E-08  0.0000E+00  0.0000E+00  0.0000E+00

Referring to Table 4 and Table 5, the CG FR and PCG methods are not able to solve the given problems. Among the SOR methods, SOR with ω = 0.5 has the biggest errors compared to the other values of the extrapolation factor. In contrast, SOR with ω = 1.25 has the smallest errors for some of the initial points, similar to SOR with ω = 1.9.

10.2 Numerical Results Based on Iteration Numbers

This section summarises the numerical results focusing on the comparison of iteration numbers. The results for positive definite and negative definite matrices with the tightest tolerance factor are combined and compared based on the number of iterations. The comparison considers the numerical results for the tightest tolerance factor only, because the tightest tolerance is the best stopping criterion and gives the smallest errors. The performance profile of all methods based on the number of iterations for symmetric matrices is shown in Figure 1.

[Figure: performance profile curves, P_s(t) against t (0 to 150), for SOR ω = 0.5, SOR ω = 1.25, SOR ω = 1.9, CG FR and PCG]

Figure 1. Performance profile of iteration number for symmetric matrix


Referring to Figure 1, the right side (t > 150) shows that all methods except the PCG method are capable of solving the systems of linear equations by reaching P_s(t) = 1. Furthermore, the left side (t < 50) reveals that the best method in terms of number of iterations is the CG FR method; by the usual rule, the performance profile at the top left corresponds to the best method among the solvers used. The worst methods are SOR with ω = 1.9, SOR with ω = 0.5, SOR with ω = 1.25 and PCG, which does not successfully converge. This shows that the CG FR method has the best performance in number of iterations. Among the SOR methods, an extrapolation factor of ω = 1.25 is the best compared to the other values.

The performance profile of all methods based on the number of iterations for non-symmetric matrices is shown in Figure 2.

[Figure: performance profile curves, P_s(t) against t (2 to 12), for SOR ω = 0.5, SOR ω = 1.25 and SOR ω = 1.9]

Figure 2. Performance profile of iteration number for non-symmetric matrix

Referring to Figure 2, the right side (t > 12) shows that the SOR methods with ω = 0.5 and ω = 1.25 are capable of solving the systems of linear equations by reaching P_s(t) = 1. Conversely, SOR with ω = 1.9 is unable to achieve P_s(t) = 1. For the non-symmetric matrices, the CG FR and PCG methods failed to solve the systems of linear equations. Furthermore, the left side reveals that the best method in terms of number of iterations is SOR with ω = 1.25; by the usual rule, the performance profile at the top left corresponds to the best method among the solvers. The worst methods are SOR with ω = 1.9 and SOR with ω = 0.5, while CG FR and PCG failed to solve all the given problems. This shows that SOR with ω = 1.25 has the best performance in number of iterations.

10.3 Numerical Results Based on CPU Times

This section summarises the numerical results focusing on the comparison of CPU times. The results for positive definite and negative definite matrices with the tightest tolerance factor are combined and compared based on CPU times. The comparison considers the numerical results for the tightest tolerance factor only, because the tightest tolerance is the best stopping criterion and gives the smallest errors. The performance profile of all methods based on the CPU times for symmetric matrices is shown in Figure 3.


[Figure: performance profile curves, P_s(t) against t (0 to 25), for SOR ω = 0.5, SOR ω = 1.25, SOR ω = 1.9, CG FR and PCG]

Figure 3. Performance profile of CPU times for symmetric matrix

Referring to Figure 3, the right side shows that all methods except the PCG method are capable of solving the systems of linear equations by reaching P_s(t) = 1. Furthermore, the left side reveals that the best method in terms of CPU time is the CG FR method; by the usual rule, the performance profile at the top left corresponds to the best method among the solvers used. In contrast, the worst methods are SOR with ω = 1.9, SOR with ω = 0.5, SOR with ω = 1.25 and PCG, which does not successfully converge. This shows that the CG FR method has the best performance in CPU time. Among the SOR methods, an extrapolation factor of ω = 1.25 is the best compared to the other values.

The performance profile of all methods based on the CPU times for non-symmetric matrices is shown in Figure 4.

[Figure: performance profile curves, P_s(t) against t (1 to 5), for SOR ω = 0.5, SOR ω = 1.25 and SOR ω = 1.9]

Figure 4. Performance profile of CPU times for non-symmetric matrix

Referring to Figure 4, the right side shows that the SOR methods with ω = 0.5 and ω = 1.25 are capable of solving the systems of linear equations by reaching P_s(t) = 1. However, SOR with ω = 1.9 is unable to achieve P_s(t) = 1. For the non-symmetric matrices, the CG FR and PCG methods failed to solve the systems of linear equations. Furthermore, the left side reveals that the best method in terms of CPU time is SOR with ω = 1.25; by the usual rule, the performance profile at the top left corresponds to the best method among the solvers. The worst methods are SOR with ω = 1.9 and SOR with ω = 0.5, while CG FR and PCG failed to solve all the given problems. This shows that SOR with ω = 1.25 has the best performance in CPU time.


11. DISCUSSION

Findings from this study show that the CPU time depends on the number of iterations: methods with lower numbers of iterations take less CPU time to converge to the solution points. Therefore, CG FR is the best method on both criteria for symmetric matrices. In contrast, for non-symmetric matrices, CG FR and PCG failed for all initial points. Researchers like Kaasschieter (1988) stated that the PCG method can be used for solving systems of linear equations with symmetric matrices only. Besides, Omolehin et al. (2008) stated that the CG method relies on an SPD matrix, and some of the techniques for the CG method frequently fail for non-symmetric matrices, as mentioned by Saad (1988). Eisenstat et al. (1983) identified a class of CG-like descent methods suitable for solving non-symmetric matrices. Moreover, Musschoot (1999) and Concus & Golub (1976) proposed Lanczos' method and the Generalized Conjugate Gradient (GCG) method respectively for non-symmetric systems, and there are other methods that can be used to solve non-symmetric systems of linear equations. Therefore, based on these supporting facts and the numerical results, it is concluded that systems of linear equations with non-symmetric matrices cannot be solved using the CG FR and PCG methods. The SOR method with ω = 1.25 is the best method on both criteria for non-symmetric matrices.

12. CONCLUSION

Systems of linear equations are solved using three iterative methods: SOR, CG FR and PCG. For the SOR method, the extrapolation factor is varied in order to identify the best value for finding the solution points. The PCG method is also used since there are cases where the matrix A in the system of linear equations has small eigenvalues which might approach zero. Such a system is called ill-conditioned, and the condition can be determined from the condition number, which can be calculated from the eigenvalues. In this research, the PCG calculation is carried out without considering the condition number. All methods are used to solve both large and small systems of linear equations.

In this research, the efficiency of CG FR, PCG, SOR with ω = 0.5, SOR with ω = 1.25 and SOR with ω = 1.9 is analysed based on the number of iterations and CPU time. From the result analysis in the previous section, all methods except the PCG method succeed in solving the symmetric matrices, while only the SOR method succeeds in solving the non-symmetric matrices. Based on the performance profiles, CG FR has the lowest number of iterations for symmetric matrices (Figure 1), while SOR with ω = 1.25 has the lowest number of iterations for non-symmetric matrices (Figure 2). Likewise, CG FR has the lowest CPU time for symmetric matrices (Figure 3), while SOR with ω = 1.25 has the lowest CPU time for non-symmetric matrices (Figure 4). In other words, the lower the number of iterations, the faster the CPU time.

The tolerance factor in this research takes three different values, and the calculation with the tightest tolerance as stopping criterion has the smallest errors. The tolerance is varied in order to determine the accuracy of the approximate points. Besides, the errors of each method are calculated using the absolute error formula. The CG FR method has the smallest errors compared to the other methods for symmetric matrices, and the SOR method with ω = 1.25 has the smallest errors compared to the other methods for non-symmetric matrices.

In conclusion, CG FR is the best method for solving systems with symmetric matrices, and the SOR method with ω = 1.25 is the best method for solving systems with non-symmetric matrices. This conclusion is based on the number of iterations, the CPU time and the error calculation.


13. FUTURE WORK

In this research, the PCG method is tested without considering the condition number. An ill-conditioned system can be identified by a high condition number; thus, the PCG method can be used to overcome ill-conditioned systems and improve the rate of convergence. Moreover, it is suggested to compare matrices of higher dimension and to apply the methods to real-world mathematical cases.

14. ACKNOWLEDGEMENTS

This work is reviewed and examined by a lecturer from MARA University of Technology, Siti Musliha Nor-Al-Din. The authors would like to thank the lecturers and the panel, Noor Erni Fazliana Mohd Akhir, involved in the Final Year Project Viva-Voce 2017.

REFERENCES

[1] S. Gharib, S. R. Ali, R. Khan, N. Munir, and M. Khanam, Global Journal of Computer Science and Technology 15, (2015).
[2] S. J. Leon, Linear Algebra with Applications, 8th ed. (Pearson Prentice Hall, Upper Saddle River, New Jersey, 2010).
[3] K. D. Shastri, R. Biswas, and P. Kumari, International Journal for Scientific Research & Development 3, 3287 (2015).
[4] G. Konghua, X. Hu, and L. Zhang, Applied Mathematics and Computation 187, 1434 (2007).
[5] M. F. Kacamarga, B. Pardamean, and J. Baurley, Applied Mathematical Sciences 8, 837 (2014).
[6] N. Jamil, International Journal of Emerging Sciences 2, 310 (2012).
[7] I. Kalambi, Journal of Applied Sciences and Environmental Management 12, 53 (2008).
[8] M. J. Rodríguez-Alvarez, F. Sánchez, A. Soriano, and A. Iborra, Mathematical and Computer Modelling 52, 1258 (2010).
[9] H. K. Liu, Applied Mathematics and Computation 217, 5259 (2011).
[10] N. A. Atasoy, B. Sen, and B. Selcuk, Procedia Technology 1, 31 (2012).
[11] A. Smoktunowicz and A. Smoktunowicz, Applied Numerical Mathematics 67, 220 (2013).
[12] L. Fuyong, Applied Mathematics and Computation 232, 1269 (2014).
[13] I. Dimov, S. Maire, and J. M. Sellier, Applied Mathematical Modelling 39, 4494 (2015).
[14] C. C. Douglas, L. Lee, and M. C. Yeung, Procedia Computer Science 80, 941 (2016).
[15] M. C. Yeung, C. C. Douglas, and L. Lee, Journal of Computational Science 20, 177 (2017).
[16] S. Sickel, M. C. Yeung, and J. Held, in Summer Research Apprentice Program Symposium (University of Wyoming, Laramie, Wyoming, 2005).
[17] T. P. Collignon, "Efficient Iterative Solution of Large Linear Systems on Heterogeneous Computing Systems," Ph.D. thesis, Delft University of Technology, 2011.
[18] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed. (Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, 2003).
[19] Y. Saad and H. A. Van Der Vorst, Journal of Computational and Applied Mathematics 123, 1 (2000).
[20] D. H. Li, Y. Y. Nie, J. P. Zeng, and Q. N. Li, Mathematical and Computer Modelling 48, 918 (2008).
[21] R. L. Burden and J. D. Faires, Numerical Analysis (Cengage Learning, Boston, 2011).
[22] I. Arany, Computers & Mathematics with Applications 38, 125 (1999).
[23] S. Mittal, International Journal of High Performance Computing and Networking 7, 292 (2014).
[24] S. Chen, International Journal of Numerical Analysis and Modeling 11, 117 (2014).
[25] R. Barrett et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, 1994).


[26] D. A. Gismalla, International Journal of Engineering and Technical Research (IJETR) 2, 7 (2014).
[27] M. Eiermann and R. S. Varga, Linear Algebra and its Applications 182, 257 (1993).
[28] W. Y. Yang, W. Cao, T. S. Chung, and J. Morris, Applied Numerical Methods Using MATLAB, 1st ed. (John Wiley & Sons, Hoboken, New Jersey, 2005).
[29] J. Modersitzki, "Conjugate Gradient Type Methods for Solving Symmetric, Indefinite Linear Systems," Report no. 868, Department of Mathematics, University of Utrecht, 1994.
[30] V. S. Ryaben'kii and S. V. Tsynkov, A Theoretical Introduction to Numerical Analysis (Chapman & Hall/CRC, Boca Raton, Florida, 2006).
[31] D. P. O'Leary, Computing 22, 59 (1979).
[32] E. W. Weisstein, CRC Concise Encyclopedia of Mathematics, 2nd ed. (Chapman & Hall/CRC, Boca Raton, Florida, 2002).
[33] M. R. Hestenes and E. Stiefel, Journal of Research of the National Bureau of Standards 49, 409 (1952).
[34] Z. Z. Bai and Z. Q. Wang, Journal of Computational and Applied Mathematics 187, 202 (2006).
[35] M. Benhamadou, Applied Mathematics and Computation 189, 927 (2007).
[36] E. Kreyszig, Advanced Engineering Mathematics, 10th ed. (John Wiley & Sons, 2010).
[37] M. Benzi, Journal of Computational Physics 182, 418 (2002).
[38] J. B. Aparecido, N. Z. Souza, and J. B. Campos-Silva, "Conjugate Gradient Method for Solving Large Sparse Linear Systems on Multi-Core Processors," in 10th World Congress on Computational Mechanics (São Paulo, 2014).
[39] I. C. F. Ipsen, Numerical Matrix Analysis: Linear Systems and Least Squares (Society for Industrial and Applied Mathematics, Philadelphia, 2009).
[40] N. Abu Bakar, "Solving Large System of Linear Equation Using Iterative Methods (Successive Over-Relaxation, Conjugate Gradient and Preconditioned Conjugate Gradient)," B.Sc. thesis, MARA University of Technology, 2017.
[41] E. F. Kaasschieter, Journal of Computational and Applied Mathematics 24, 265 (1988).
[42] J. O. Omolehin, M. K. A. Abdulrahman, K. Rauf, and P. J. Udoh, African Journal of Mathematics and Computer Science Research 1, 28 (2008).
[43] Y. Saad, Journal of Computational and Applied Mathematics 24, 89 (1988).
[44] S. C. Eisenstat, H. C. Elman, and M. H. Schultz, SIAM Journal on Numerical Analysis 20, 345 (1983).
[45] C. Musschoot, Journal of Computational and Applied Mathematics 101, 61 (1999).
[46] P. Concus and G. H. Golub, in Computing Methods in Applied Sciences and Engineering, 1st ed. (Springer-Verlag Berlin Heidelberg, 1976), pp. 56–65.
