
Biorap N°75, April 2013

The projects supported cover a wide range of topics and involve companies, training programmes (initial and continuing education, summer schools) as well as research laboratories.

Breakdown of resource requests: public vs. private

The HPC@LR centre aims to spread HPC culture, including in fields not historically linked to it, in particular to face the data deluge ("Big Data"). The hardware architecture was designed, on the one hand, to let researchers test various hardware architectures, size their needs upstream and study the impact of a hardware change on their computations, and, on the other hand, to pool their hardware investment and thereby give access to a more powerful architecture. This architecture evolves according to the needs expressed by our users, with for example the recent arrival of two large-memory nodes (two IBM x3850 X5 nodes, 80 cores and 1 TB of RAM per node).

Key figures:
• more than 100 projects supported
• more than 15 companies supported and partners
• more than 5 million compute hours
• more than 350 user accounts created
• more than 30 research centres in the Languedoc-Roussillon region supported by the HPC@LR centre
• more than €420k in collaborative projects and other centre activities
• more than 200 people trained in HPC
• more than 20.57 TFlops of peak performance (double precision)

A 100th project emblematic of the positioning of the HPC@LR centre and of the Languedoc-Roussillon region's momentum on environmental and water topics.

The celebration of the 100th project supported by the HPC@LR centre honoured the company BRLi.

BRL Ingénierie (BRLi) is an engineering company specialised in fields related to water, the environment and land-use planning. With more than 160 staff, BRLi operates in France and in more than 80 countries, on behalf of local communities, private companies, local authorities and major international funding bodies. The collaborations between the HPC@LR centre and BRLi take several forms. One is a partnership within the FUI Litto-CMS project, which aims to develop a software platform and innovative services for forecasting and managing marine flooding crises. Another is services carried out on the company's behalf, which allowed BRLi to compete in highly contested markets for hydraulic flood modelling on large rivers such as the Loire, thanks to reduced computation times and the external expertise provided by the HPC@LR centre. The ties with the HPC@LR centre and the support received (from ASA and IBM experts, in particular for software installation and user training) have raised the skills of BRLi's teams, who now launch their computations autonomously on the HPC@LR centre. This hundredth project is emblematic of the strengths of the Languedoc-Roussillon region in fields related to the environment in general and water in particular: links with OSU-OREME, a world-class competitiveness cluster, an IBM centre of excellence.

Contact: [email protected] / HPC@LR centre website: http://www.hpc-lr.univ-montp2.fr

News from GENCI and PRACE

GENCI: 2013 campaign for allocating resources on the national facilities

The first session of the 2013 campaign for allocating resources on the national facilities is now closed. In total, 537 applications were submitted and validated: 108 new projects and 429 project renewals. At the end of the allocation process, 506 projects were granted hours on GENCI's resources.


The second session of the 2013 campaign will be open from Monday 2 April to Friday 3 May 2013, for hours allocated from 1 July 2013.

PRACE-3IP

PRACE has published on its website the details of the third implementation phase, which started on 7 July 2012 for a duration of 24 months with a budget of €26.8M.

    ww.prace-ri.eu/PRACE-Third-Implementation-Phase,88?lang=en

PRACE: results of Call 6

PRACE has selected 57 of the 88 projects submitted under its 6th call for projects.

    http://www.prace-ri.eu/IMG/pdf/2013-02-28_call_6_allocations_final.pdf

PRACE: spring school

It will take place from 23 to 26 April in Umeå (Sweden). The central theme: "New and Emerging Technologies - Programming for Accelerators".

    https://www.hpc2n.umu.se/prace2013/information

PRACE: summer school

The PRACE summer school will take place from 17 to 21 June in Ostrava (Czech Republic).
http://events.prace-ri.eu/conferenceDisplay.py?confId=140

PRACE Digest

Issue 1/2013 of the PRACE Digest is available:

    http://www.prace-ri.eu/IMG/pdf/prace_digest_2013.pdf

SysFera-DS: seamless access to HPC resources for users and applications. Implementation at EDF R&D

During the last 20 years, the distributed platforms (e.g., workstations, clusters, supercomputers, grids, clouds) available to users have become more and more complex and diverse. As users primarily seek ease of use, many of these platforms are not used to their full potential, and users often cope with sub-optimal performance. Engineers and researchers should not need to know the characteristics of the machines they use, be they static (e.g., performance, capacity) or dynamic (e.g., availability, load). They should launch their application and let the system take care of how and where it runs to ensure optimal performance. For a system administrator, managing such a set of heterogeneous machines can feel like a nightmare. Optimizing the use of all the available resources, at the lowest possible cost, remains a highly complex task. A standard set of tools accessible from a single interface helps significantly with this management.

Due to the complexity of the infrastructure and the wide heterogeneity of users, applications and available tools, EDF R&D has to cope with daily problems such as: "How do I connect to this remote machine?", "How do I run a job on this specific batch scheduler?", "How do I manage my remote files?", "What were my account name and password to access this particular resource?". In an ideal world, scientists would rather spend most of their time on their main field of expertise, such as CAD modeling, fluid dynamics or structural mechanics simulations, than on dealing with the complexity of the infrastructure.

To address this issue, EDF R&D and SysFera [1] have co-developed a solution named SysFera-DS that provides end-users with a unified and simple view of the resources. It can be accessed through the Unix command line or through several APIs (C++, Java, Python), and at a higher level through a web portal (SysFera-DS WebBoard), which provides a graphical view of the functionalities. Transparent access to remote computational resources is made possible via the main modules of SysFera-DS's middleware, Vishnu. Used conjointly, these modules provide easy access to remote computational resources. The solution is non-intrusive (no particular rights are needed on the infrastructure to install or use it) and non-exclusive (it does not replace solutions already in place). Using SysFera-DS, applications can easily run on remote HPC resources, benefiting from a stable and homogeneous interface whatever software is installed on the infrastructure.

In this article, we present some uses of this framework that EDF R&D engineers are now testing. In particular, we focus on the interactions between SysFera-DS and SALOME, an open-source platform (LGPL license) co-developed by EDF and the CEA that provides generic pre- and post-processing and code coupling for numerical simulation. We underline the ease of integrating SysFera-DS into this widely used simulation platform and stress the expected benefits for SALOME of adopting this coupling strategy.

[1] http://www.sysfera.com

1. EDF R&D infrastructure

a. Resources


In October 2012, the R&D Division of Électricité de France (EDF) was hosting five resources dedicated to intensive computation: a 37,680-core IBM BlueGene/P scheduled by LoadLeveler, a 65,536-core IBM BlueGene/Q and a 17,208-core x86 Intel Xeon X5670 Westmere cluster managed by SLURM, a 1,984-core Intel Xeon X7560 Nehalem-EX under Torque/Maui, and a 384-core Intel Xeon X7560 Nehalem-EX run through LSF. This infrastructure changes on a regular basis to increase the HPC resources made available to scientists, in computing power, storage or network bandwidth.

All these machines, as well as the workstations used by EDF developers, run a 64-bit Linux OS, have a dedicated filesystem and implement their own authentication. Coping with such a diversity of schedulers and finely managing file transfers can be time-consuming for users, especially when a new infrastructure appears and they have to learn how to address it.

b. Users and applications

The need addressed here is to develop and deploy an access middleware on all the machines of this infrastructure in order to provide users with an environment that remains uniform and constant over time. This kind of HPC bus grants them a unique and perennial interface to manage jobs, move files, obtain information and authenticate themselves to the underlying machines. The targeted population covers experienced developers as well as end-users who may have no knowledge of HPC environments at all. While virtualizing and simplifying the use of HPC resources for the latter, the proposed middleware will also provide easy access to the details of a running application. For example, a developer might want to check some intermediate files produced during the execution, download a core file or examine live the memory footprint of the application.

Applications running on the infrastructure belong to one or more of these families:
• Parallel MPI applications that can use from tens to thousands of cores depending on their level of scalability.
• Bag-of-tasks applications, which consist of running a given application, either parallel or sequential, while sweeping all the possible combinations of a set of parameters. Each run is independent of the others and their total number can reach several hundreds of thousands (a minimal parameter-sweep sketch follows this list).
• Interactive applications, for which the user may want to enter specific parameters remotely during the execution, or visualize the computed results immediately.
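To make the bag-of-tasks family more concrete, the following sketch simply enumerates a parameter sweep and prints the independent runs it would launch. It is a generic illustration rather than EDF or Vishnu code; the parameter names and the run_simulation command are hypothetical.

    # Generic bag-of-tasks sketch: one independent run per parameter combination.
    # Parameter names and the launched command are hypothetical examples.
    import itertools

    parameters = {
        "mesh_size": [64, 128, 256],
        "time_step": [0.01, 0.001],
        "solver": ["gmres", "bicgstab"],
    }

    combinations = list(itertools.product(*parameters.values()))
    print(f"{len(combinations)} independent runs to launch")

    for values in combinations:
        args = [f"--{name}={value}" for name, value in zip(parameters, values)]
        # Each run is independent of the others; in a real campaign each command
        # would be submitted as a separate job rather than executed here.
        print("would launch: run_simulation " + " ".join(args))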

In addition, some of these applications can be launched, monitored and controlled via a dedicated platform running on the user's workstation. In this article, we address the example of the SALOME platform. Some end-users also require simplified web access to their application: the idea consists in launching, monitoring and controlling the execution of a scientific application through a friendly web page accessible via a simple browser running on any operating system.

c. Constraints

The deployment of such a middleware on the infrastructure of an industrial company such as EDF is not easy. Any proposed solution must pass through a long and complex process of testing and qualification before being offered as an additional service on the common infrastructure. In particular, the middleware developed must take the following constraints into account to ease its deployment and acceptance:
• It should not require any administrator privileges on the client workstation or on the frontend of the cluster addressed. By using only regular user accounts, testing and deployment are kept simpler.
• It should be robust and scale to manage at least 1,000 different users and at least 100 simultaneous connections.
• It should not depend on a local reconfiguration of any scheduler. Interfacing with schedulers by simply parsing the output of their command line might not be a good idea.
• It should interface with all the authentication systems available on the infrastructure addressed. In particular, it should optionally connect to the company's LDAP directories.
• It should provide an emergency shutdown option, available to the infrastructure administrator, to stop the middleware immediately in order to prevent any side effect it may create on any of the machines.
• As a distributed client-server architecture, it should allow several different versions of the servers and clients to coexist on the same machines.

2. SysFera-DS: a solution for transparent access to HPC resources

SysFera develops a solution named SysFera-DS that provides seamless access to remote computational resources. It consists of a web frontend, the SysFera-DS WebBoard, and optionally a distributed backend called Vishnu.


In this section, we present the functionalities provided by SysFera-DS, with an emphasis on Vishnu and a quick introduction to the WebBoard.

a. Vishnu: a distributed and scalable middleware

Since 2010, EDF R&D and SysFera have co-developed Vishnu, a distributed, open-source middleware that eases access to computational resources. Its first target was naturally EDF's internal HPC resources, on which it has been installed since September 2010. Vishnu is built around the following principles: stability, robustness, reliability, performance and modularity. Vishnu is open-source software, distributed under the CeCILL V2 license and freely available on GitHub [2].

(i) Functionalities

Functionally, Vishnu is composed of the following four modules:
• UMS (User Management Services): manages user and daemon authentication and authorization for all the other modules. It provides SSO (Single Sign-On) on the whole platform using Vishnu's internal database or LDAP, along with ssh.
• IMS (Information Management Services): provides platform monitoring wherever a daemon is launched (process states, CPU, RAM, batch scheduler queues, etc.). It can also start and stop remote Vishnu processes in case of failure, or when requested by the administrator.
• FMS (File Management Services): provides remote file management. It offers services similar to POSIX ones (ls, cat, tail, head, mv, cp, etc.), along with synchronous and asynchronous transfers (copy/move) between remote and/or local resources.
• TMS (Task Management Services): submission of generic scripts to any kind of batch scheduler (currently supported: SLURM, LSF, LoadLeveler, Torque, SGE, PBS Pro), as well as submission to machines that are not handled by a batch scheduler.

The commands provided by these four modules are divided into two categories: administrator and user commands. For a detailed list of the available functionalities, we encourage readers to refer to the Vishnu documentation. Figure 1 presents an overview of the deployment of SysFera-DS at EDF R&D. It is deployed on four of EDF's clusters and supercomputers.

[2] http://github.com/sysfera/vishnu

    Figure 1: SysFera-DS deployed at EDF R&D

(ii) Functional architecture and design

Figure 2 shows the different functional parts of Vishnu and their interactions. On the client side (upper part of the diagram) we show the different interface components provided to users: APIs and command-line interfaces. These interfaces give access to the different Vishnu modules through a client component that is itself connected to the corresponding server component through the communication bus. The server components handle data items that belong to different inter-related databases, each managing a specific kind of data. The TMS and FMS server components also handle specific interactions with external entities: the batch schedulers for the management of jobs on clusters or supercomputers, and the SCP and RSYNC protocols of the underlying Linux system for the transfer of files between two storage servers or between a client system and a storage server.

Figure 2. Functional architecture. (Diagram: on the client side, the Java API, Python API and command-line interface sit on top of the Vishnu C++ API and the UMS/IMS/FMS/TMS client components; on the server side, the UMS/IMS/FMS/TMS server components communicate through a ZMQ bus, are backed by the user-and-infrastructure, session, jobs, file-transfers and monitoring databases, and interact with external components: batch schedulers and SCP/RSYNC.)

From a deployment perspective, several configurations can be used. For the sake of simplicity, we always deploy a centralized database, which needs to be accessible by all Vishnu servers (also called SeDs, for Server Daemon). The simplest deployment requires one UMS and one FMS SeD, plus one TMS SeD per cluster (and optionally an IMS SeD on each machine where another SeD is deployed, for monitoring).


In this configuration, the clients (the users) need to know the address (URI and port) of all the SeDs in order to address them. When only a few SeDs are deployed, and when there are no restrictions on the ports the users can access, this can be a good solution.

In more complicated infrastructures, with several clusters and with limitations on the ports users can access, the administrators can deploy another daemon called the dispatcher. Basically, the dispatcher acts as a proxy between the clients and the SeDs. Maintaining a directory of all available SeDs and services, it receives all client requests and forwards them to the relevant SeDs. Through this dispatcher, a client only needs to know the address of the dispatcher to interact with the system. This also greatly eases the addition of a new cluster, as users do not need to modify their configuration files; only the dispatcher needs to be aware of changes in the underlying infrastructure. As the dispatcher can be seen as a single point of failure, several of them can be deployed in the platform to share the load of routing messages and to ensure higher availability. Note that if a dispatcher were to fail, users could still contact the SeDs directly, provided they know their addresses. With this approach, the well-designed interaction of the main components makes the deployment very flexible and easy to adapt to any infrastructure configuration.

(iii) Implementation details

Vishnu has been developed in C++ following a set of MISRA rules, and has been audited by an external company to guarantee code reliability and maintainability. Vishnu development also followed a Model-Driven Development process; it uses emf4cpp for model design and source-code generation, and SWIG for generating the Python and Java APIs.

All communications within Vishnu are handled by ZMQ (ZeroMQ). ZMQ is an enhanced socket library designed for high throughput. It can send messages across various transports such as in-process, inter-process, TCP and multicast, with various low-level communication patterns, but it is the programmer's responsibility to implement higher-level patterns for their needs. Every element of Vishnu needs only one open port to communicate, which makes it easy to integrate into an existing platform. Moreover, if a dispatcher is deployed, the clients only need to know the address and port of the dispatcher to interact with Vishnu: the dispatcher manages all the messages within the infrastructure, so a client does not have to know the addresses of all the Vishnu elements that are deployed. Note that, to prevent the dispatcher from becoming a bottleneck, several dispatchers can be deployed, thus sharing the load of handling the clients' messages.

To prevent parsing issues and bugs, TMS does not rely on the batch schedulers' command-line interfaces but instead links to the client libraries themselves. This has the advantage that APIs and data types are well defined, so the command results do not depend on the local configuration or compilation options. The inconvenience of having these link dependencies in the TMS SeD is alleviated by a plugin manager: the TMS SeD is not statically linked to these libraries, but instead dynamically loads them at startup depending on a configuration file.
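As an illustration of this single-port, dispatcher-in-the-middle arrangement, the sketch below wires a minimal ZeroMQ proxy in Python (pyzmq). It only demonstrates the generic pattern; it is not Vishnu's actual C++ dispatcher, and the port numbers are arbitrary examples.

    # Illustrative ZeroMQ proxy: clients talk to a single well-known port and the
    # proxy forwards their requests to whichever backend daemons are connected
    # (standing in here for the SeDs). Not Vishnu's real code.
    import zmq

    def run_dispatcher(client_port: int = 5555, sed_port: int = 5556) -> None:
        ctx = zmq.Context.instance()
        frontend = ctx.socket(zmq.ROUTER)          # clients connect here
        frontend.bind(f"tcp://*:{client_port}")
        backend = ctx.socket(zmq.DEALER)           # backend daemons connect here
        backend.bind(f"tcp://*:{sed_port}")
        try:
            zmq.proxy(frontend, backend)           # blocks, routing requests and replies
        finally:
            frontend.close()
            backend.close()
            ctx.term()

    if __name__ == "__main__":
        run_dispatcher()

Running several such proxies, each on its own address, is one way the load of routing messages could be shared, in line with the multiple-dispatcher deployment described above.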

(iv) Operating principles

When coping with heterogeneous platforms, it is most of the time impossible to rely on a system that would provide single sign-on (SSO) on the whole platform; each cluster or supercomputer can be managed independently, with its own user management system (some use LDAP, some don't). Thus, users may have several logins and passwords, or several ssh keys to manage, in order to connect to the infrastructure. Vishnu manages and hides this heterogeneity, under the heavy constraint that Vishnu cannot have privileged access to the resources (no root or sudo account), yet still needs to execute commands under the correct user account. These issues have been solved in the following way:
• Users connect to Vishnu using a single login/password. These identifiers are checked either against Vishnu's database or against one or several LDAP directories. If the credentials are correct, a Vishnu session is opened and the user can access the other commands to interact with the platform. The session key is checked every time a command is issued.
• To allow Vishnu to execute commands under the right account, the user first declares their local accounts on the platform (the username they have on the different resources and the path to their home directory) and grants Vishnu ssh access to those local accounts using ssh keys. Vishnu can then connect to the user's local account with its ssh key (as sketched below). Thus, identity is preserved, and no overlay for user management needs to be installed: Vishnu relies on local policies without modifying them.
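The key-based delegation in the second point can be pictured with a short sketch: a daemon holding the ssh key registered by the user connects to the user's local account and runs a command there. This is an illustration only (Vishnu itself is written in C++); the Python paramiko library, host name, user name, key path and command below are all hypothetical stand-ins.

    # Illustration of key-based delegation: connect to the user's local account
    # with the registered ssh key and execute a command on their behalf.
    import paramiko

    def run_as_local_account(host, local_user, key_path, command):
        client = paramiko.SSHClient()
        # Demo only: a production service would check known_hosts instead.
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(hostname=host, username=local_user, key_filename=key_path)
        try:
            _stdin, stdout, _stderr = client.exec_command(command)
            return stdout.read().decode()
        finally:
            client.close()

    if __name__ == "__main__":
        print(run_as_local_account("frontend.example.com", "jdoe",
                                   "/srv/keys/jdoe_id_rsa", "id && hostname"))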

This use case is quite common: when interconnecting several computational centres, each of them can have local policies regarding user management, but ssh connections are quite often the only common means of access.


Here is a summary of this process: (1) the user opens a session, providing their credentials (these can be given via the CLI or stored in a ~/.netrc file); (2) the credentials are checked against Vishnu's database or an LDAP directory (the checking order can be changed): if the credentials are valid, a session is created; (3) the user can interact with the system; (4) each time a command is issued, the user's global ID (their login) and the session key are forwarded to the relevant daemon; (5) the daemon connects via ssh to the correct user's local account and executes the command; (6) when the user has finished, they can close their session (otherwise it expires after a pre-defined period).

Vishnu also provides, through TMS, means to abstract the underlying batch schedulers. In order to execute a script on a remote scheduler, the user just has to use the vishnu_submit command with their script. Apart from the various options available through the different APIs to control the submission process, the script can also contain a set of variables understood by Vishnu and replaced by the correct options for the target batch scheduler. Thus, a script can contain either options specific to a given batch scheduler, or Vishnu options, which are scheduler-independent. Here are a few examples of Vishnu options:

#%vishnu_working_dir (the job's remote working directory), #%vishnu_wallclocklimit (the estimated time for the job execution), #%vishnu_nb_cpu (the number of CPUs per node), and many more to specify memory, mail notification, queue, etc.

In addition to these variables meant to interact with the batch scheduler, Vishnu also provides a set of variables to retrieve information on the environment in which the job is executed: VISHNU_BATCHJOB_ID (the ID assigned to the job by the batch system), VISHNU_BATCHJOB_NODEFILE (the path to the file containing the list of nodes assigned to the job), VISHNU_BATCHJOB_NUM_NODES (the total number of nodes allocated to the job), VISHNU_INPUT/OUTPUT_DIR ...

Finally, the user can also define their own variables meant to provide input data to the script. The system allows strings and files to be passed as parameters to the script. They are provided through the APIs in the form of key=value pairs, where the key is the name of the variable in the script. If the value is a string, it is provided to the script as is; if it is a file, the system first transfers the input file onto the remote cluster, and the variable contains the local path to the transferred file.

By default, TMS provides a backend that is not tied to any batch scheduler, thus allowing users to submit jobs to machines that are not managed by a batch scheduler. This comes in handy if you have spare desktop computers that you want to use, or if you need to execute something on the gateway of your cluster instead of submitting a job to the batch scheduler (e.g., compilation processes). In this case the submission process is exactly the same as with a batch scheduler, and the scripts have access to the same options and variables.
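To tie the directives and environment variables above together, here is a minimal job-script sketch. Python is used only to keep a single language for the examples in this article; the placement and name=value form of the #%vishnu_ directives, and the idea of submitting a Python file directly rather than a shell wrapper, are assumptions for illustration, as the article does not spell out the exact syntax.

    #!/usr/bin/env python3
    # Sketch of a generic script carrying Vishnu directives as comment lines.
    # The directive syntax below is assumed; only the names come from the article.
    #%vishnu_working_dir=/scratch/jdoe/run42     # remote working directory (hypothetical path)
    #%vishnu_wallclocklimit=01:00:00             # estimated execution time
    #%vishnu_nb_cpu=8                            # CPUs per node

    import os

    # Environment variables documented above, filled in by Vishnu/TMS at run time.
    job_id = os.environ.get("VISHNU_BATCHJOB_ID", "unknown")
    num_nodes = int(os.environ.get("VISHNU_BATCHJOB_NUM_NODES", "1"))
    nodefile = os.environ.get("VISHNU_BATCHJOB_NODEFILE", "")

    print(f"Job {job_id} running on {num_nodes} node(s)")
    if nodefile and os.path.exists(nodefile):
        with open(nodefile) as f:
            print("Allocated nodes:", [line.strip() for line in f if line.strip()])

Such a script would then be handed to the vishnu_submit command mentioned above; its exact arguments are not given in the article, so they are not reproduced here.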

b. SysFera-DS WebBoard

The WebBoard does not necessarily rely on Vishnu to operate (see Figure 3). It can be deployed directly on top of a single cluster, in which case the WebBoard interacts directly with the batch scheduler and the underlying filesystem; or, if the infrastructure is more complex, it can interact with Vishnu, in which case the WebBoard only sends commands to Vishnu, which handles jobs and files.

Figure 3. The WebBoard can be deployed on top of one or several clusters.

The WebBoard has been developed with the Grails framework, and also relies on the following technologies: Spring Security, Hibernate/GORM, RabbitMQ and jQuery. Basically, the WebBoard provides the same set of functionalities as Vishnu through a graphical interface: single sign-on, file management, job management and monitoring. Besides providing a graphical interface, and thus abstracting the usage of the infrastructure even further, the WebBoard also provides higher-level functionalities, described in the following sections.

(i) Applications

While Vishnu only handles jobs, the WebBoard is able to provide higher-level services on top of


BRIEF NEWS

• How to Benefit from AMD, Intel and NVIDIA Accelerator Technologies in Scilab
CAPS has published an article explaining how to use the AMD, Intel and NVIDIA acceleration technologies in the Scilab library in a flexible and portable way, thanks to the OpenHMPP technology developed by CAPS. The article can be downloaded from:
http://www.caps-entreprise.com/wp-content/uploads/2013/03/How-to-Benefit-from-AMD-Intel-and-Nvidia-Accelerator-Technologies-in-Scilab.pdf

• Green Computing Report
The Tabor Communications group, which publishes HPCwire and HPC in the Cloud among others, has announced a new portal focused on the energy and environmental efficiency of data centres.
http://www.greencomputingreport.com

• Towards a BigData Top 100 list?
SDSC (the San Diego Supercomputer Center of the University of California) is considering creating, with the help of the scientific community, a list of the most powerful systems in the field of processing very large volumes of data. A specific benchmark would be set up. Information on this initiative is available at:
http://www.bigdatatop100.org/
An article published in the journal Big Data:
http://online.liebertpub.com/doi/pdfplus/10.1089/big.2013.1509

• Berkeley Lab prepares for Exascale
NERSC (the National Energy Research Scientific Computing Center) at Berkeley has started the installation of the Edison system (or NERSC-7), a Cray XC30 with a peak performance of more than 2 PFlops. The centre is already preparing the definition of the next generation, NERSC-8, which should be installed at the end of 2015, the last step before an exaflop system.

• The US DoE prepares for Exascale
The three main US DoE centres (Oak Ridge, Argonne and Lawrence Livermore) are taking a global approach to the evolution of their supercomputers. A call for tenders (or three coordinated calls) should be issued before the end of 2013 to deploy systems of more than 100 PFlops around 2016-2017. A very attractive market for the vendors Cray, IBM, and even SGI! And it is also the road to exascale.

• In China: 100 PFlops before Exascale
According to HPCwire, China is preparing the construction of a 100 PFlops computer that should be operational before the end of 2014. It would be based on Intel components: 100,000 Xeon Ivy Bridge-EP CPUs combined with 100,000 Xeon Phi coprocessors.

• India: inauguration of PARAM Yuva II
India is back in HPC with the inauguration of a hybrid system with a peak performance of more than 500 TFlops, called PARAM Yuva II, installed at the University of Pune.

• UK: $45M for the Hartree Centre
The Hartree Centre, at the Science and Technology Facilities Council (STFC) in Daresbury, has been inaugurated and has received $45M, of which €28M should be devoted to R&D on software for grand scientific challenges and on software enabling companies to make better use of HPC. The centre has also become Unilever's partner in the field of HPC.
http://www.stfc.ac.uk/hartree/default.aspx

• An ENISA report on the Cloud
The European cyber-security agency ENISA has published a new report that examines Cloud Computing from the point of view of critical information infrastructure protection (CIIP). It underlines the growing importance of Cloud Computing, given its users and data and its increasing use in critical sectors such as finance, health and insurance.
http://www.enisa.europa.eu/activities/Resilience-and-CIIP/cloud-computing/critical-cloud-computing

• An application uses more than a million cores
CTR (Stanford Engineering's Center for Turbulence Research) set a new record by using more than a million cores for a complex fluid dynamics application. It used the IBM BG/Q Sequoia machine at LLNL.

• IDC HPC Award Recognition Program
IDC is launching its annual programme to identify and reward the most representative HPC projects in the world. Deadline for initial submissions: 19 April.
https://www.hpcuserforum.com/innovationaward/applicationform.html

• Atipa Technologies
Atipa Technologies (based in Lawrence, Kansas) will supply a 3.4 PFlops peak system to the Department of Energy's Environmental Molecular Sciences Laboratory (EMSL). It combines 23,000 Intel processors with Intel Phi (MIC) accelerators.


• Bull
- Météo-France has chosen to procure from Bull supercomputers intended for weather forecasting and climate research. The selected models, Bullx B700 DLC systems, will be installed in Toulouse from the first quarter of 2013. The total peak performance is expected to be 5 PFlops.
- Bull and the Dresden University of Technology (Germany) have signed an agreement under which Bull will supply a supercomputer with a peak performance of more than 1 PFlops.
- On 22 March 2013, Bull launched its Centre of Excellence in Parallel Programming, located in Grenoble, which will deliver a high level of expertise to help laboratories and companies optimise their applications for the new manycore technologies. The Centre will offer a broad portfolio of services, including analysis, consulting, code parallelisation and optimisation. It will also benefit from the expertise of two companies: Allinea and CAPS.

• ClusterVision
ClusterVision has installed a 200 TFlops cluster at the University of Paderborn (Germany). With 10,000 cores, the cluster comprises Intel Xeon E5-2670 processors (16 cores) and NVIDIA Tesla K20 GPUs.

• Cray
- Cray has received a $39M contract from HLRN (the North-German Supercomputing Alliance) to install two Cray XC30 systems (formerly codenamed "Cascade") at the Zuse Institute (Berlin) and at Leibniz University (Hannover). These systems will be operated jointly and will provide a peak performance of more than 1 PFlops.
- Cray will supply two Cray XC30 supercomputers and two Cray Sonexion 1600 storage systems to the German national weather service in Offenbach. The contract is valued at $23M.

• Dell
TACC: a presentation by Dr Karl Schulz, director of scientific applications at TACC, focuses on bringing the system into production and on the challenges met during its construction. Available at:
http://www.hpcadvisorycouncil.com/events/2013/Switzerland-Workshop/Presentations/Day_1/7_TACC.pdf

• HP
HP has started the installation of a 1 PFlops system at NREL (the National Renewable Energy Laboratory) of the US Department of Energy. The HP servers use Intel Xeon processors and Xeon Phi coprocessors.

• IBM
EPFL (Switzerland) has acquired a BG/Q with a performance of 172 TFlops. It is among the 10 greenest systems in the world.

    AGENDA

9 to 11 April - EASC 2013: Solving Software Challenges for Exascale (Edinburgh, UK)
15 to 16 April - 5th PRACE Executive Industrial Seminar (Stuttgart, Germany)
22 to 24 April - EE-LSDS 2013: Energy Efficiency in Large Scale Distributed Systems (Vienna, Austria)
8 to 10 May - CLOSER 2013: 3rd International Conference on Cloud Computing and Services Science (Aachen, Germany)
13 to 16 May - CCGRID 2013: The 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (Delft, Netherlands)
13 to 16 May - Extreme Grid Workshop: Extreme Green & Energy Efficiency in Large Scale Distributed Systems (Delft, Netherlands)
20 May - HiCOMB 2013: 12th IEEE International Workshop on High Performance Computational Biology (Boston, MA, USA)
20 May - CASS 2013: The 3rd Workshop on Communication Architecture for Scalable Systems (Boston, MA, USA)
20 May - HPDIC 2013: 2013 International Workshop on High Performance Data Intensive Computing (Boston, MA, USA)
20 May - HCW 2013: Twenty-second International Heterogeneity in Computing Workshop (Boston, MA, USA)
20 May - EduPar 2013: Third NSF/TCPP Workshop on Parallel and Distributed Computing Education (Boston, MA, USA)
20 to 24 May - IPDPS 2013: 27th IEEE International Parallel & Distributed Processing Symposium (Boston, MA, USA)
24 May - VIPES 2013: 1st Workshop on Virtual Prototyping of Parallel and Embedded Systems (Boston, MA, USA)
24 May - PCO 2013: Third Workshop on Parallel Computing and Optimization (Boston, MA, USA)
27 to 30 May - ECMS 2013: 27th European Conference on Modelling and Simulation (Aalesund University College, Norway)
27 May to 1 June - Cloud Computing 2013: The Fourth International Conference on Cloud Computing, GRIDs, and Virtualization (Valencia, Spain)
27 May to 1 June - Future Computing 2013: The Fifth International Conference on Future Computational Technologies and Applications (Valencia, Spain)


27 May to 1 June - Computational Tools 2013: The Fourth International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking (Valencia, Spain)
27 May to 1 June - Adaptive 2013: The Fifth International Conference on Adaptive and Self-Adaptive Systems and Applications (Valencia, Spain)
30 to 31 May - CAL 2013: 7e Conférence sur les Architectures Logicielles (Toulouse, France)
5 to 7 June - ICCS 2013: International Conference on Computational Science: Computation at the Frontiers of Science (Barcelona, Spain)
5 to 7 June - TPDACS 2013: 13th Workshop on Tools for Program Development and Analysis in Computational Science (Barcelona, Spain)
5 to 7 June - ALCHEMY Workshop: Architecture, Languages, Compilation and Hardware support for Emerging ManYcore systems (Barcelona, Spain)
6 to 10 June - GECCO 2013: Genetic and Evolutionary Computation Conference (Amsterdam, Netherlands)
10 to 14 June - ICS 2013: International Conference on Supercomputing (Eugene, OR, USA)
10 June - ROSS 2013: International Workshop on Runtime and Operating Systems for Supercomputers (Eugene, OR, USA)
16 to 20 June - ISC 2013: International Supercomputing Conference (Leipzig, Germany)
17 to 18 June - VTDC 2013: The 7th International Workshop on Virtualization Technologies in Distributed Computing (New York, NY, USA)
17 to 21 June - HPDC 2013: The 22nd International ACM Symposium on High Performance Parallel and Distributed Computing (New York, NY, USA)
17 to 21 June - FTXS 2013: 3rd International Workshop on Fault-Tolerance for HPC at Extreme Scale (New York, NY, USA)
17 to 20 June - PROMASC 2013: The Second Track on Provisioning and Management of Service Oriented Architecture and Cloud Computing (Hammamet, Tunisia)
20 to 23 June - CEC 2013: Evolutionary algorithms for Cloud computing systems (Cancun, Mexico)
24 to 27 June - AHS 2013: 2013 NASA/ESA Conference on Adaptive Hardware and Systems (Turin, Italy)
25 to 26 June - Ter@Tec 2013 (Palaiseau, France)
27 to 29 June - IGCC 2013: The Fourth International Green Computing Conference (Arlington, VA, USA)
27 to 30 June - ISPDC 2013: The 12th International Symposium on Parallel and Distributed Computing (Bucharest, Romania)
27 June to 2 July - CLOUD 2013: The 6th IEEE International Conference on Cloud Computing (Santa Clara, CA, USA)
27 June to 2 July - BigData 2013: The 2013 International Congress on Big Data (Santa Clara, CA, USA)
1 to 2 July - HLPP 2013: International Symposium on High-level Parallel Programming and Applications (Paris, France)
1 to 5 July - ECSA 2013: 7th European Conference on Software Architecture (Montpellier, France)
1 to 5 July - HPCS 2013: The International Conference on High Performance Computing & Simulation (Helsinki, Finland)
14 to 20 July - ACACES 2013: Ninth International Summer School on Advanced Computer Architecture and Compilation for High-Performance and Embedded Systems (Fiuggi, Italy)
16 to 18 July - ISPA 2013: The 11th IEEE International Symposium on Parallel and Distributed Processing with Applications (Melbourne, Australia)
22 to 25 July - WorldComp 2013: The 2013 World Congress in Computer Science, Computer Engineering, and Applied Computing (Las Vegas, NV, USA)
22 to 25 July - PDPTA 2013: The 2013 International Conference on Parallel and Distributed Processing Techniques and Applications (Las Vegas, NV, USA)
22 to 25 July - DMIN 2013: The 2013 International Conference on Data Mining (Las Vegas, NV, USA)
26 to 30 August - EuroPar 2013: Parallel and distributed computing (Aachen, Germany)
28 to 29 August - Globe 2013: 6th International Conference on Data Management in Cloud, Grid and P2P Systems (Prague, Czech Republic)
10 to 11 September - BigData 2013: Big Data Summit Europe (Sintra, Portugal)
15 to 18 September - EuroMPI 2013 (Madrid, Spain)

The websites of these events can be reached from the ORAP server (Agenda section).

If you would like to share information about your activities in the field of high-performance computing, please contact: [email protected]

Issues of BI-ORAP are available in PDF format on the ORAP website.

ORAP
A collaborative structure created by CEA, CNRS and INRIA

Secretariat: Chantal Le Tonquèze
INRIA, campus de Beaulieu, 35042 Rennes
Tel: 02 99 84 75 33, fax: 02 99 84 74 99
[email protected]
http://www.irisa.fr/orap