head	1.3;
access;
symbols
	charm6_1:1.3
	charm_6_0_1:1.3
	charm6_0_1:1.3
	charm6_0:1.3
	ChaNGa_1-0:1.3
	charm5_9:1.3
	charm_5-4-2:1.1
	charm_5-4-1:1.1;
locks; strict;
comment	@# @;


1.3
date	2003.01.22.20.34.53;	author olawlor;	state Exp;
branches;
next	1.2;

1.2
date	2002.09.09.17.53.38;	author olawlor;	state Exp;
branches;
next	1.1;

1.1
date	2000.09.14.19.42.33;	author milind;	state Exp;
branches;
next	;


desc
@@


1.3
log
@Major reorganization and cleaning--separated FEM-specific
code (fem_*) from rest of code, to show how to write a
serial-or-FEM program.  Also updated for new FEM framework
"mesh" interface.

Still needed:
  -Example problem with cohesive elements
  -NetFEM output (and/or a real output file)
@
text
@INTRODUCTION

This is a 2D crack propagation application built on top of the
FEM framework. It is a fully functional analysis application,
and includes input, partitioning, and processing.  There is no
output for now, however.


BUILD

To compile the crack2d program, just type "make"
in this directory. This builds one program: pgm, the parallel
crack2d application.

There is also a serial version of the exact same program, which
can be made by typing "make serial".

The files in this directory include the input files:
    cohesive.inp: Main configuration file--timestep and materials
    crck_bar.inp: Mesh input file--lists nodes and elements

The common I/O and physics files:
    config.C: Read configuration file
    mesh.C: Read and set up nodes and elements 
    node.C: Physics for node timestepping
    lst_coh2.C: Physics for cohesive elements
    lst_NL.C: Physics for volumetric elements
    crack.h: Main header file

The FEM framework version files:
    fem_main.C: FEM version's main routine
    fem_mesh.C: Interfaces mesh with FEM framework

The serial version files:
    serial_main.C: Serial version of fem_main.C


RUN

# This will run the program pgm on 2 processors, by mapping 4 partitions
# onto these two processors. This program runs for 2000 iterations, and
# prints the time taken for each iteration at the end. Migration is
# performed after every 25 iterations. Note that the number of iterations can
# be changed in the cohesive.inp file.

./charmrun +p2 pgm +vp4

Bigger data files are available for this program, so if you are
interested in using them, please contact me. (They are so big that
I don't want to check them into this directory.)


HISTORY

Originally written by Scott Breitenfeld (1999)
Converted to C and the FEM framework by members of PPL (1999)
Used to develop FEM framework by Milind Bhandarkar (2000)
Updated by Orion Lawlor (2003)
@


1.2
log
@Made TCharm and FEM routine names consistent with each other
and AMPI/Mblock: module name in caps, underscore, one initial
capital, then everything else underscore-separated lowercase:
	LIBRARY_Foo_bar
@
text
@d1 2
d4 4
a7 2
FEM framework.  This program is now OBSOLETE, as it uses an older
version of the FEM framework.
d9 1
d12 1
a12 3
in this directory. This will make two programs: pgm and getmesh

getmesh is needed to prepare inputs for the program, whereas pgm is the
d15 2
a16 1
After compilation, run:
d18 3
a20 2
# this will extract the mesh description from the input file crck_bar.inp
# and will put it in crck_bar.mesh
d22 7
a28 1
./getmesh
d30 3
a32 2
# this will convert the mesh in crck_bar.mesh to a graph in crck_bar.graph
# needed to do partitioning with Metis
d34 2
a35 1
../../../../bin/mesh2graph crck_bar.mesh crck_bar.graph
a36 2
# this will partition the graph in crck_bar.graph into 4 partitions
# and will create individual files meshdata.Pe* for each partition.
d38 1
a38 1
../../../../bin/gmap crck_bar.graph 4
d40 1
a40 1
# this will run the program pgm on 2 processors, by mapping 4 partitions
d46 1
a46 1
./conv-host +p2 pgm +vp4
d50 9
a58 1
check them into this directory.)
@


1.1
log
@Added instructions for compiling and running the crack2d program.
@
text
@d2 2
a3 2
FEM framework. Note that one needs to use migratable threads of
Converse in order to compile and run this application.
a4 3
So, make sure that you invoke super_install as

SUPER_INSTALL ampi {net-sol-cc|mpi-origin} -O -DCMK_THREADS_USE_ISOMALLOC=1
@

