# CSV to SACS input file

At work, we’ve been looking to determine the cyclic axial capacity of a fixed platform’s drilled and grouted foundations in calcareous soils from storm time-history sets.[^1] The team is looking to generate pile loads from this using Bentley’s SACS analysis suite. Totalled, this adds up to more than 30,000 discrete load cases.

With its archaic fixed format, reminiscent of FORTRAN, SACS is unfriendly when it comes to developing input files by hand, in our case a seastate load input file. With 8,000 load cases, this requires generating about 127,000 unique lines of input per history, error-free. Doing so manually is impracticable.

I volunteered to find a way to automate this, provided the storm histories were made available in comma separated value (CSV) files, which our Metocean team kindly did. Last weekend, I rolled up my sleeves and began coding in earnest in Python, using pandas dataframes, to turn thousands of lines of Metocean data into hundreds of thousands of lines of ungodly SACS input. I am glad that I now have a working script that does this in a couple of seconds. Generating the input file is done in two steps:

1. The Metocean data file structure (e.g., `TS001.000040TS.csv`) is as follows:

```
H (m), T(s), ThetaP PlatformNth(Deg), WS (m/s), CS5(m/s),  CS20(m/s),  CS30(m/s),  CS50(m/s),    CS70(m/s),  CS90(m/s), CS110(m/s), CS130(m/s), CS150(m/s), CS170(m/s),
4.48,  14.56,290.00,  13.84,  0.75,   0.68,  0.60,   0.50,  0.44,   0.41,  0.35,   0.30,  0.25,   0.15,
4.81,  14.67,290.00,  13.84,  0.75,   0.68,  0.60,   0.50,  0.44,   0.41,  0.35,   0.30,  0.25,   0.15,
4.40,  14.21,290.00,  13.84,  0.75,   0.68,  0.60,   0.50,  0.44,   0.41,  0.35,   0.30,  0.25,   0.15,
4.18,  12.34,290.00,  13.84,  0.75,   0.68,  0.60,   0.50,  0.44,   0.41,  0.35,   0.30,  0.25,   0.15,
2.83,   8.32,290.00,  13.84,  0.75,   0.68,  0.60,   0.50,  0.44,   0.41,  0.35,   0.30,  0.25,   0.15,
3.76,  14.89,290.00,  13.84,  0.75,   0.68,  0.60,   0.50,  0.44,   0.41,  0.35,   0.30,  0.25,   0.15,
6.07,  14.04,290.00,  13.84,  0.75,   0.68,  0.60,   0.50,  0.44,   0.41,  0.35,   0.30,  0.25,   0.15,
...
```


Read the Metocean data file in CSV, and save it as a formatted CSV file using pandas. This streamlines the CSV file:

```sh
python3 fdf.py -f <metocean data file>
```

2. Step 1 above generates a formatted Metocean data file, which is then used to generate the SACS seastate input file:

```sh
python3 slc.py -f <formatted metocean data file> > seastate1.inp
```
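Behind both steps, pandas does the CSV parsing. As a minimal sketch (using a hypothetical two-row, five-column sample in place of a real Metocean file), each row becomes one load case and cells are addressed by position:

```python
import io

import pandas as pd

# A hypothetical two-row stand-in for a Metocean CSV (same header shape)
sample = io.StringIO(
    "H (m), T(s), ThetaP PlatformNth(Deg), WS (m/s), CS5(m/s)\n"
    "4.48, 14.56, 290.00, 13.84, 0.75\n"
    "4.81, 14.67, 290.00, 13.84, 0.75\n"
)
df = pd.read_csv(sample)

# Each row is one load case; cells are addressed by position with df.iat
print(len(df))       # 2 load cases
print(df.iat[0, 0])  # 4.48, wave height of the first load case
```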


The script generates this seastate1.inp file (truncated here for brevity):

```
# Reading FTS001.000040TS.csv file...done.
FILE B
LOADLB   1Envir for pile storm analysis
WAVE
WAVE0.95STOK  4.48       14.56         290.0      D  -90.0   4.0  90MM10 1
CURR
CURR        1.18    0.15   290.0                        BC NL         AWP
CURR       21.18    0.25   290.0
CURR       41.18     0.3   290.0
CURR       61.18    0.35   290.0
CURR       81.18    0.41   290.0
CURR      101.18    0.44   290.0
CURR      121.18     0.5   290.0
CURR      141.18     0.6   290.0
CURR      151.18    0.68   290.0
CURR      166.18    0.75   290.0
LOADLB   2Envir for pile storm analysis
WAVE
WAVE0.95STOK  4.81       14.67         290.0      D  -90.0   4.0  90MM10 1
CURR
CURR        1.18    0.15   290.0                        BC NL         AWP
CURR       21.18    0.25   290.0
CURR       41.18     0.3   290.0
...
```


It is important to keep the Metocean data in each CSV file to 9,999 load cases or fewer, since SACS has room for only as many (in a four-character-wide field). This keeps load conditions and labels properly numbered without a counter reset, or the need for extra code to add new counters, say, alphanumeric ones.
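Should one want a guard against this, a small check (a sketch, not part of the scripts below; the name `check_case_count` is made up for illustration) can refuse oversized files before any cards are generated:

```python
import pandas as pd

MAX_CASES = 9999  # SACS load case numbers occupy a four-character field

def check_case_count(df):
    """Raise if the dataframe holds more load cases than SACS can number."""
    if len(df) > MAX_CASES:
        raise ValueError(
            "%d load cases exceed the %d limit; split the CSV file"
            % (len(df), MAX_CASES)
        )

# A small synthetic frame passes silently
check_case_count(pd.DataFrame({"H (m)": [4.48, 4.81]}))
```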

Requirements: Both scripts listed below require python3 with the pandas and docopt modules, which can be installed at the command line with:

```sh
python3 -m pip install --upgrade pandas docopt
```


## Script to format Metocean data

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Format CSV file with Pandas
fdf.py 2022 ckunte

Usage: fdf.py (-f <file>)
       fdf.py --help
       fdf.py --version

Options:
  -h, --help  Show this help
  -f --file   Specify CSV input file to format (required)

"""
import pandas as pd
from docopt import docopt


def main(*args):
    print("# Reading " + datfile + " file...", end="")
    df = pd.read_csv("./" + datfile)
    print("done.")
    # remove wind speed column from data (by index -- this is a workaround:
    # should be [3], but somehow [2] works -- possibly a python 3.8.10 bug)
    df2 = df.drop(df.columns[[2]], axis=1)
    return df2.to_csv("F" + datfile)


if __name__ == "__main__":
    args = docopt(__doc__, version="Format CSV file with Pandas, v0.1")
    datfile = "%s" % (args["<file>"])
    main(datfile)
    print("Formatted file:", "F" + datfile)
```


Here’s how the fdf.py script works:

1. Reads the CSV file (given at the command line) into a dataframe (df)
2. Returns a CSV file from the dataframe with an F prefixed to the filename
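As a toy illustration of the drop in step 2 (a synthetic single-row frame, not the real data): on a clean frame like this one, the wind speed column sits at positional index 3; why `[2]` works on the real file remains the mystery noted in the script’s comment.

```python
import pandas as pd

# A synthetic single-row frame with the same column order as the data
df = pd.DataFrame(
    {"H": [4.48], "T": [14.56], "Dir": [290.0], "WS": [13.84], "CS5": [0.75]}
)
# Drop the wind speed column by positional index
df2 = df.drop(df.columns[[3]], axis=1)
print(list(df2.columns))  # ['H', 'T', 'Dir', 'CS5']
```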

If many files need formatting with the above script, this can be done in one go with the following command:

```sh
for FILE in *.csv; do python3 ./fdf.py -f $FILE; done
```

## Script to convert Metocean data (in CSV) into SACS seastate input

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Generate SACS storm load cards from a CSV file
slc.py 2022 ckunte
Tested for python v3.8.10 and v3.10.8, numpy v1.23.4, and pandas v1.5.1

Usage: slc.py (-f <file>)
       slc.py --help
       slc.py --version

Options:
  -h, --help  Show this help
  -f --file   Specify CSV input file (required)

"""
import pandas as pd
from docopt import docopt


def main(*args):
    print("# Reading " + datfile + " file...", end="")
    df = pd.read_csv("./" + datfile)
    print("done.")
    print("FILE B")
    for i in range(len(df)):
        #
        # PRINTING WAVE INPUT LINES
        #
        print("LOADCN" + "{0:>4}".format(i + 1))
        print("LOADLB" + "{0:>4}".format(i + 1) + "Envir for pile storm analysis")
        print(W[0])
        print(
            "{:4}".format(W[0])  # col 1-4, line label
            + "{:4}".format(W[1])  # col 5-8, kinematics factor
            + "{:4}".format(W[2])  # col 9-12, wave type
            + "{:>6}".format(df.iat[i, 0])  # col 13-18, wave height
            + "{:>6}".format(F[0])  # col 19-24, still water level, skip (from LDOPT)
            + "{:>6}".format(df.iat[i, 1])  # col 25-30, wave period
            + "{:>8}".format(F[0])  # col 31-38, wave length, skip if period is given
            + "{:>6}".format(df.iat[i, 2])  # col 39-44, wave angle
            + "{:>6}".format(F[0])  # col 45-50, mud line elevation, skip (from LDOPT)
            + "{:>0}".format(W[3])  # col 51, input mode
            + "{:>7}".format(W[4])  # col 52-58, crest position
            + "{:>6}".format(W[5])  # col 59-64, step size
            + "{:1}".format(F[0])  # col 65-66, steps for dyn analysis, skip
            + "{:1}".format(W[6])  # col 67-68, static steps
            + "{:1}".format(W[7])  # col 69-70, critical position
            + "{:1}".format(W[8])  # col 71-72, member segmentation (max)
            + "{:1}".format(W[9])  # col 73-74, member segmentation (min)
            # + "{0:0}".format(F[0])  # col 75, local acceleration only, skip
            # + "{0:0}".format(F[0])  # col 76, print option, skip
            # + "{0:<1}".format(F[0])  # col 77-78, order of stream func., skip
        )
        #
        # PRINTING CURRENT INPUT LINES
        #
        print(C[0])
        print(
            "{0:4}".format(C[0])  # col 1-4, line label
            + "{:>4}".format(F[0])  # col 5-8, min inline curr velocity, skip
            + "{:>8}".format(eam[9])  # col 9-16, elev above mud line
            + "{:>8}".format(df.iat[i, 12])  # col 17-24, curr velocity
            + "{:>8}".format(df.iat[i, 2])  # col 25-32, curr dir
            + "{:>8}".format(F[0])  # col 33-40, mudline elev override, skip
            + "{:>8}".format(F[0])  # col 41-48, blocking factor, skip
            + "{:>8}".format(F[0])  # col 49-56, elev, skip
            + "{:1}".format(C[1])  # col 57-58, elev, generate blocking fac.
            + "{:>0}".format(F[0])  # col 59, null
            + "{:1}".format(C[2])  # col 60-61, crest stretching opt.
            + "{:>0}".format(F[0])  # col 62, null
            + "{:2}".format(F[0])  # col 63-65, velocity units opt., skip
            + "{:>0}".format(F[0])  # col 66, null
            + "{:2}".format(F[0])  # col 67-69, elev percent opt., skip
            + "{:>3}".format(F[0])  # col 70, null (for now this is a workaround)
            + "{:>2}".format(C[3])  # col 71-73, AWP opt.
        )
        # adjust ranges depending upon the current profile
        for n, m in zip(range(8, -1, -1), range(11, 2, -1)):
            print(
                "{:4}".format(C[0])  # col 1-4, line label
                + "{:>4}".format(F[0])  # col 5-8, min inline curr velocity, skip
                + "{:>8}".format(eam[n])  # col 9-16, elev above mud line
                + "{:>8}".format(df.iat[i, m])  # col 17-24, curr velocity
                + "{:>8}".format(df.iat[i, 2])  # col 25-32, curr dir
            )


if __name__ == "__main__":
    args = docopt(
        __doc__, version="Generate SACS storm load cards from a CSV file, v0.1"
    )
    datfile = "%s" % (args["<file>"])
    #
    # -- BEGIN USER INPUTS --
    #
    # WAVE DEFINITION AND POSITION PARAMETERS (SACS SEASTATE MANUAL, PG 170)
    #
    W = [
        "WAVE",  # line label
        0.95,  # kinematics factor
        "STOK",  # wave type
        "D",  # input mode (length (L), degree (D), or time (T))
        -90.0,  # crest position -- wave
        4.00,  # step size -- wave
        " 90",  # static steps -- wave
        "MM",  # critical position -- wave
        "10",  # member segmentation (max)
        " 1",  # member segmentation (min)
    ]
    # CURRENT PARAMETERS (SACS SEASTATE MANUAL, PG 171)
    #
    C = [
        "CURR",  # line label
        "BC",  # option to generate blocking factor
        "NL",  # crest stretching option
        "AWP",  # apparent wave period option
    ]
    # ELEVATION ABOVE MUDLINE (FOR CURRENT PROFILE)
    #
    eam = [
        166.18,
        151.18,
        141.18,
        121.18,
        101.18,
        81.18,
        61.18,
        41.18,
        21.18,
        1.18,
    ]
    # FILLER FOR EMPTY (OR NULL) COLUMN BLOCKS
    #
    F = [" "]
    #
    # CSV DATA FILE FROM METOCEAN TO USE
    #
    # Headers in CSV file:
    # H (m), T(s), ThetaP PlatformNth(Deg), WS (m/s), CS5(m/s), CS20(m/s),
    # CS30(m/s), CS50(m/s), CS70(m/s), CS90(m/s), CS110(m/s), CS130(m/s),
    # CS150(m/s), CS170(m/s)
    #
    # -- END USER INPUTS --
    main(datfile, W, F, C)
```

For a batch of formatted files, SACS seastate input files can be generated in one go like so:

```sh
for FILE in F*.csv; do python3 ./slc.py -f $FILE > $FILE.inp; done
```


This script is very specific to the structure of the CSV file and the order in which data parameters occur. The first three columns represent wave data (height, period, and direction), and the last ten columns (aligned with the `eam` list) represent current speed at ten intervals from the water surface to the seabed, in decreasing order.[^2] SACS, however, requires this input in increasing order, and therefore the ranges are reversed (aligned with `eam`) and run in negative increments to yield the appropriate column indices. Other than that, the script just re-prints data from the dataframe in the fixed format that SACS requires. Here’s how it works:

1. Loads the data from a CSV file, input at the command line, into a dataframe
2. Prints a `FILE B` line for a standalone seastate file
3. Commences a loop over all lines in the CSV file, where each line is a load case
4. Prints `WAVE` cards from the wave data in the first three columns
5. Prints `CURR` cards (incl. a multi-line local loop) from the current data
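The reversed ranges in the last step can be sanity-checked in isolation. Each tuple pairs an `eam` elevation index with its dataframe column index, climbing from just above the seabed (the first `CURR` card, `eam[9]` with column 12, is printed before the loop) up to the surface:

```python
# Pair eam indices with dataframe column indices; both ranges run in
# reverse so elevations come out in the increasing order SACS expects
pairs = list(zip(range(8, -1, -1), range(11, 2, -1)))
print(pairs[0])   # (8, 11): eam[8] = 21.18 m paired with column 11
print(pairs[-1])  # (0, 3):  eam[0] = 166.18 m paired with column 3
```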

There is, of course, an opportunity to make this script generic (e.g., by updating it to count columns automatically from either end and generate column indices accordingly for further use), so that the code needs no refactoring should the data structure change, but this code solved our immediate problem.
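As a sketch of that generalisation (the helper `current_columns` is hypothetical, not part of the scripts above): since the wave block always occupies the first three columns, the current-speed column indices can be derived from the frame itself, however many there are:

```python
import pandas as pd

def current_columns(df, n_wave_cols=3):
    """Positional indices of the current-speed columns, assuming wave
    data (height, period, direction) occupies the first n_wave_cols."""
    return list(range(n_wave_cols, len(df.columns)))

# Toy frame: three wave columns plus four current columns
df = pd.DataFrame(
    [[4.48, 14.56, 290.0, 0.75, 0.68, 0.60, 0.50]],
    columns=["H", "T", "Dir", "CS5", "CS20", "CS30", "CS50"],
)
print(current_columns(df))  # [3, 4, 5, 6]
```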

[^1]: The approach is described in C.T. Erbrich, M.P. O’Neill, P. Clancy, and M.F. Randolph, Axial and lateral pile design in carbonate soils, The University of Western Australia, 2010.

[^2]: If the current profile changes, then this script will require editing, specifically the hardcoded current-specific indices at (a) the fourth print line (`# col 9-16`), (b) the fifth print line (`# col 17-24`), (c) the `range()` parameters, and (d) the elevation-above-mudline user input list `eam`.