
from a linear to a logarithmic scale, change a colour palette, or something similar; simple changes of this kind are easily made directly to the generated file. EMPOST’s calling syntax is

empost orders_file results_file

The second argument is the name of a file that EMPOST produces, containing the numerical results of the named orders: integrals, fluxes, mode matching coefficients, and so on. These are in the form of assignment statements, and are parsed by POEMS when postprocessing is completed. POEMS then uses them to compute the value of the penalty function for the current iteration.
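The exact contents depend on the orders file, but as a purely illustrative example (the variable names below are hypothetical, not ones EMPOST necessarily emits), a results file might contain assignment statements such as

    MIDDLEFLUX = 1.234567E-03
    MODEMATCH1 = 8.765432E-01

each one naming a requested result that POEMS substitutes into the penalty-function expression.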

2.5. The Visualization System: VIS5D

VIS5D is an advanced visualization program originally written for meteorological data. It runs under the X Window System, which is native to Linux and other Unix derivatives, but which has to be added to Windows and OS/2. Windows users can install Hummingbird Exceed, which works well with VIS5D once all the arcane X parameters are set up. See Appendix B for a working sample X configuration.

The Vis5D conversion code in EMPOST is based on a stand-alone program by Theodore G. van Kessel, and in this release it is fully integrated into EMPOST. Both animated and static Vis5D files are generated using the MOVIE3D statement.

2.6. Cluster Control

There are lots of ways to structure a cluster, lots of communication styles (such as the Message Passing Interface, MPI), and lots of cluster management systems (such as the Sun Grid Engine, SGE). POEMS is not tied to any of these, but is easily adapted to them. The main script runs on a frontend machine. Inter-host communication requires no specific support other than a high-capacity, low-latency TCP/IP network. FIDO uses TCP/IP socket communication between fido subdomains running on different hosts, and local communication between subdomains running on the same host.

To leave the user in control, cluster control is not hardcoded into POEMS, but relies on an external script. The supplied script is fidossh, which uses a shared file system (e.g. NFS, XFS, or PVFS2) for communication and ssh for cluster node control. This design is suitable for clusters of up to perhaps 20 nodes, depending on filesystem performance. The high-bandwidth host-to-host communication is organized in a distributed fashion between cluster hosts, so the frontend machine does not become a bottleneck in small and medium-sized clusters. For larger clusters, FIDO can use a hierarchical supervision scheme, so that a single frontend node is not forced to supervise hundreds or thousands of hosts, but the cluster script would have to be tailored for the application.
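The details of fidossh are not reproduced here, but the following Python sketch illustrates the general ssh-launch pattern it embodies. The host names, shared directory, and fido command line are hypothetical placeholders, not the actual contents of fidossh.

    #!/usr/bin/env python3
    # Illustrative only: NOT the supplied fidossh script. Host names, the
    # shared directory, and the fido command line are hypothetical.
    import subprocess

    HOSTS = ["node01", "node02", "node03"]   # hypothetical cluster hosts
    WORKDIR = "/shared/poems/run1"           # assumed to be NFS-mounted on every host

    def launch(host, n):
        # Start one fido subdomain on a remote host; results appear on the
        # shared file system, so no explicit data transfer is needed here.
        cmd = "cd %s && fido subdomain%d.in" % (WORKDIR, n)
        return subprocess.Popen(["ssh", host, cmd])

    procs = [launch(h, n) for n, h in enumerate(HOSTS)]
    for p in procs:
        p.wait()                             # block until every subdomain finishes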

The downside of this flexibility is that the user has to apportion the work manually. Future versions of POEMS will help automate this, based on the CPU speed, number of cores, and amount of memory possessed by each host. It will probably remain semiautomatic unless the subdomains can be made very small.
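As a rough sketch of the kind of heuristic such automation might use (the host table and the cells-per-gigabyte figure below are invented for illustration, not POEMS parameters), one could weight each host by its aggregate CPU throughput and cap its share by available memory:

    # Hypothetical work-apportionment heuristic; not part of POEMS.
    hosts = {
        "node01": {"cores": 8, "ghz": 3.0, "mem_gb": 32},
        "node02": {"cores": 4, "ghz": 2.4, "mem_gb": 16},
    }
    CELLS_PER_GB = 4.0e6      # assumed FDTD cells per gigabyte of RAM
    total_cells = 2.0e7       # total size of the computational domain

    weight = {h: v["cores"] * v["ghz"] for h, v in hosts.items()}
    wsum = sum(weight.values())
    for h, v in hosts.items():
        share = total_cells * weight[h] / wsum            # proportional to CPU throughput
        share = min(share, v["mem_gb"] * CELLS_PER_GB)    # never more than fits in memory
        print("%s: about %.2e cells" % (h, share))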
