CSV to SACS seastate
At work, we’ve been looking to determine the cyclic axial capacity of a fixed platform’s drilled and grouted foundations in calcareous soils from storm time-history sets. The approach is described in the paper Axial and lateral pile design in carbonate soils by C.T. Erbrich, et al. (2010). The team is looking to generate pile loads from these histories using Bentley’s SACS analysis suite; totalled, this adds up to upwards of 30,000 discrete load cases.
With its archaic fixed format, reminiscent of FORTRAN, SACS is unfriendly when it comes to developing user input files by hand, in our case a seastate load input file. With 8,000 load cases, this requires generating about 127,000 unique lines of input per history, error-free. Doing this manually is impractical.
I volunteered to find a way to automate this, provided the storm histories were made available in comma separated value (CSV) files, which our Metocean team kindly did. Last weekend, I rolled up my sleeves and began coding in earnest in Python, using the pandas dataframe structure, to turn thousands of lines of Metocean data into hundreds of thousands of lines of ungodly SACS input. I am glad to now have a working script that does this in a couple of seconds. Generating the input file is done in two steps:
- The Metocean data file structure (e.g., TS001.000040TS.csv) is as follows:

  H (m), T(s), ThetaP PlatformNth(Deg), WS (m/s), CS5(m/s), CS20(m/s), CS30(m/s), CS50(m/s), CS70(m/s), CS90(m/s), CS110(m/s), CS130(m/s), CS150(m/s), CS170(m/s)
   4.48, 14.56, 290.00, 13.84, 0.75, 0.68, 0.60, 0.50, 0.44, 0.41, 0.35, 0.30, 0.25, 0.15
   4.81, 14.67, 290.00, 13.84, 0.75, 0.68, 0.60, 0.50, 0.44, 0.41, 0.35, 0.30, 0.25, 0.15
   4.40, 14.21, 290.00, 13.84, 0.75, 0.68, 0.60, 0.50, 0.44, 0.41, 0.35, 0.30, 0.25, 0.15
   4.18, 12.34, 290.00, 13.84, 0.75, 0.68, 0.60, 0.50, 0.44, 0.41, 0.35, 0.30, 0.25, 0.15
   2.83,  8.32, 290.00, 13.84, 0.75, 0.68, 0.60, 0.50, 0.44, 0.41, 0.35, 0.30, 0.25, 0.15
   3.76, 14.89, 290.00, 13.84, 0.75, 0.68, 0.60, 0.50, 0.44, 0.41, 0.35, 0.30, 0.25, 0.15
   6.07, 14.04, 290.00, 13.84, 0.75, 0.68, 0.60, 0.50, 0.44, 0.41, 0.35, 0.30, 0.25, 0.15
  ...

  Read the Metocean data file in CSV, and save it as a formatted CSV file using pandas. This streamlines the CSV file:

  python3 fdf.py -f <metocean data file>

- Step 1 above generates a formatted Metocean data file, which is then used to generate the SACS seastate input file:

  python3 slc.py -f <formatted metocean data file> > seastate1.inp
The script generates this seastate1.inp file (truncated here for brevity):
# Reading FTS001.000040TS.csv file...done.
FILE B
LOADCN 1
LOADLB 1Envir for pile storm analysis
WAVE
WAVE0.95STOK 4.48 14.56 290.0 D -90.0 4.0 90MM10 1
CURR
CURR 1.18 0.15 290.0 BC NL AWP
CURR 21.18 0.25 290.0
CURR 41.18 0.3 290.0
CURR 61.18 0.35 290.0
CURR 81.18 0.41 290.0
CURR 101.18 0.44 290.0
CURR 121.18 0.5 290.0
CURR 141.18 0.6 290.0
CURR 151.18 0.68 290.0
CURR 166.18 0.75 290.0
LOADCN 2
LOADLB 2Envir for pile storm analysis
WAVE
WAVE0.95STOK 4.81 14.67 290.0 D -90.0 4.0 90MM10 1
CURR
CURR 1.18 0.15 290.0 BC NL AWP
CURR 21.18 0.25 290.0
...
...
LOADCN7931
LOADLB7931Envir for pile storm analysis
WAVE
WAVE0.95STOK 6.63 11.36 185.0 D -90.0 4.0 90MM10 1
CURR
CURR 1.18 0.17 185.0 BC NL AWP
CURR 21.18 0.28 185.0
CURR 41.18 0.33 185.0
CURR 61.18 0.39 185.0
CURR 81.18 0.45 185.0
CURR 101.18 0.48 185.0
CURR 121.18 0.55 185.0
CURR 141.18 0.67 185.0
CURR 151.18 0.75 185.0
CURR 166.18 0.83 185.0
It is important to keep the Metocean data in each CSV file to 9,999 load cases or fewer, since the load condition number in SACS is only four characters wide. Staying within this limit allows load conditions and labels to be numbered without a counter reset, or the need for extra code to add new counters, say, alphanumeric ones.
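If a storm history does exceed this limit, the CSV can be split before formatting. Here is a minimal sketch, not part of the two scripts below, that chunks an oversized Metocean CSV into files of at most 9,999 rows each using pandas; the _partN naming scheme is illustrative:

import pandas as pd

MAX_CASES = 9999  # the LOADCN number field in SACS is four characters wide

def split_csv(datfile, limit=MAX_CASES):
    # split an oversized Metocean CSV into chunks of at most `limit` rows
    df = pd.read_csv(datfile)
    for k in range(0, len(df), limit):
        part = k // limit + 1
        # hypothetical naming: TS001.000040TS.csv -> TS001.000040TS_part1.csv
        df.iloc[k : k + limit].to_csv(
            datfile.replace(".csv", f"_part{part}.csv"), index=False
        )

split_csv("TS001.000040TS.csv")  # example file name from above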
Requirements: Both scripts listed below require Python 3 with the pandas and docopt modules, which can be installed at the command line as follows:
python3 -m pip install --upgrade pandas docopt
Script to format Metocean data
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Format CSV file with Pandas
fdf.py 2022 ckunte

Usage: fdf.py (-f <file>)
       fdf.py --help
       fdf.py --version

Options:
  -h, --help  Show this help
  -f --file   Specify CSV input file to format (required)

"""
import pandas as pd
from docopt import docopt


def main(datfile):
    print("# Reading " + datfile + " file...", end="")
    df = pd.read_csv("./" + datfile)
    print("done.")
    # remove wind speed column from data (by index -- this is a workaround:
    # should be [3], but somehow [2] works -- possibly a python 3.8.10 bug)
    df2 = df.drop(df.columns[[2]], axis=1)
    # write the formatted file with an F prefixed to its name
    return df2.to_csv("F" + datfile)


if __name__ == "__main__":
    args = docopt(__doc__, version="Format CSV file with Pandas, v0.1")
    datfile = args["<file>"]
    main(datfile)
    print("Formatted file:", "F" + datfile)
Here’s how the fdf.py script works:

- Reads the CSV file (given at the command line) into a dataframe (df)
- Drops the wind speed column, which the seastate input does not need
- Returns a CSV file from the dataframe, with an F prefixed to the filename
If one has many files that need formatting with the above script, this can be done in one go with the following command:

for FILE in *.csv; do python3 ./fdf.py -f "$FILE"; done
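If a POSIX shell is not at hand, the same batch run can be driven from Python itself. A small sketch, assuming fdf.py sits in the current folder and python3 is on the path:

import glob
import subprocess

# run fdf.py over every CSV file in the current folder, one at a time
for f in sorted(glob.glob("*.csv")):
    subprocess.run(["python3", "./fdf.py", "-f", f], check=True)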
Script to convert Metocean data (in CSV) into SACS seastate input
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Generate SACS storm load cards from a CSV file
slc.py 2022 ckunte
Tested for python v3.8.10, v3.10.8 with pandas >= v1.5.1

Usage: slc.py (-f <file>)
       slc.py --help
       slc.py --version

Options:
  -h, --help  Show this help
  -f --file   Specify CSV input file (required)

"""
import pandas as pd
from docopt import docopt


def main(datfile, W, F, C, eam):
    print("# Reading " + datfile + " file...", end="")
    df = pd.read_csv("./" + datfile)
    print("done.")
    print("FILE B")
    for i in range(len(df)):
        # PRINTING WAVE INPUT LINES
        print(f"LOADCN{i+1:4}")
        print(f"LOADLB{i+1:4}Envir for pile storm analysis")
        print(W[0])
        print(
            f"{W[0]:4}"  # col 1-4, line label
            + f"{W[1]:4}"  # col 5-8, kinematics fac.
            + f"{W[2]:4}"  # col 9-12, wave type
            + f"{df.iat[i, 0]:>6}"  # col 13-18, wave height
            + f"{F[0]:>6}"  # col 19-24, SWL, skip (from LDOPT)
            + f"{df.iat[i, 1]:>6}"  # col 25-30, wave period
            + f"{F[0]:>8}"  # col 31-38, wave length, skip if period is given
            + f"{df.iat[i, 2]:>6}"  # col 39-44, wave angle
            + f"{F[0]:>6}"  # col 45-50, mud line elev., skip (from LDOPT)
            + f"{W[3]:>0}"  # col 51, input mode
            + f"{W[4]:>7}"  # col 52-58, crest position
            + f"{W[5]:>6}"  # col 59-64, step size
            + f"{F[0]:1}"  # col 65-66, steps for dyn. analysis, skip
            + f"{W[6]:1}"  # col 67-68, static steps
            + f"{W[7]:1}"  # col 69-70, critical position
            + f"{W[8]:1}"  # col 71-72, member seg. (max)
            + f"{W[9]:1}"  # col 73-74, member seg. (min)
            # + "{0:0}".format(F[0])  # col 75, local accel. only, skip
            # + "{0:0}".format(F[0])  # col 76, print opt, skip
            # + "{0:<1}".format(F[0])  # col 77-78, order of stream func., skip
        )
        # PRINTING CURRENT INPUT LINES
        print(C[0])
        print(
            f"{C[0]:4}"  # col 1-4, line label
            + f"{F[0]:>4}"  # col 5-8, min inline curr velocity, skip
            + f"{eam[9]:>8}"  # col 9-16, elev above mud line
            + f"{df.iat[i, 12]:>8}"  # col 17-24, curr velocity
            + f"{df.iat[i, 2]:>8}"  # col 25-32, curr dir
            + f"{F[0]:>8}"  # col 33-40, mudline elev override, skip
            + f"{F[0]:>8}"  # col 41-48, blocking factor, skip
            + f"{F[0]:>8}"  # col 49-56, elev, skip
            + f"{C[1]:1}"  # col 57-58, option to generate blocking fac.
            + f"{F[0]:>0}"  # col 59, null
            + f"{C[2]:1}"  # col 60-61, crest stretching opt.
            + f"{F[0]:>0}"  # col 62, null
            + f"{F[0]:2}"  # col 63-65, velocity units opt., skip
            + f"{F[0]:>0}"  # col 66, null
            + f"{F[0]:2}"  # col 67-69, elev percent opt., skip
            + f"{F[0]:>3}"  # col 70, null (for now this is a workaround)
            + f"{C[3]:>2}"  # col 71-73, AWP opt.
        )
        # adjust ranges depending upon the current profile
        for n, m in zip(range(8, -1, -1), range(11, 2, -1)):
            print(
                f"{C[0]}"  # col 1-4, line label
                + f"{F[0]:>4}"  # col 5-8, min inline curr velocity, skip
                + f"{eam[n]:>8}"  # col 9-16, elev above mud line
                + f"{df.iat[i, m]:>8}"  # col 17-24, curr velocity
                + f"{df.iat[i, 2]:>8}"  # col 25-32, curr dir
            )


if __name__ == "__main__":
    args = docopt(
        __doc__, version="Generate SACS storm load cards from a CSV file, v0.1"
    )
    datfile = args["<file>"]
    #
    # -- BEGIN USER INPUTS --
    #
    # WAVE DEFINITION AND POSITION PARAMETERS (SACS SEASTATE MANUAL, PG 170)
    #
    W = [
        "WAVE",  # line label
        0.95,  # kinematics factor
        "STOK",  # wave type
        "D",  # input mode (length (L), degree (D), or time (T))
        -90.0,  # crest position -- wave
        4.00,  # step size -- wave
        " 90",  # static steps -- wave
        "MM",  # critical position -- wave
        "10",  # member segmentation (max)
        " 1",  # member segmentation (min)
    ]
    # CURRENT PARAMETERS (SACS SEASTATE MANUAL, PG 171)
    #
    C = [
        "CURR",  # line label
        "BC",  # option to generate blocking factor
        "NL",  # crest stretching option
        "AWP",  # apparent wave period option
    ]
    # ELEVATION ABOVE MUDLINE (FOR CURRENT PROFILE)
    #
    eam = [
        166.18,
        151.18,
        141.18,
        121.18,
        101.18,
        81.18,
        61.18,
        41.18,
        21.18,
        1.18,
    ]
    # FILLER FOR EMPTY (OR NULL) COLUMN BLOCKS
    #
    F = [" "]
    #
    # CSV DATA FILE FROM METOCEAN TO USE
    #
    # Headers in CSV file:
    # H (m), T(s), ThetaP PltfNth(deg), WS (m/s), CS5(m/s), CS20(m/s),
    # CS30(m/s), CS50(m/s), CS70(m/s), CS90(m/s), CS110(m/s), CS130(m/s),
    # CS150(m/s), CS170(m/s)
    #
    # -- END USER INPUTS --
    main(datfile, W, F, C, eam)
For a batch of formatted files, SACS seastate input files can be generated in one go like so:

for FILE in F*.csv; do python3 ./slc.py -f "$FILE" > "$FILE".inp; done
This script is specific to the structure of the CSV file and the order in which data parameters occur. The first three columns represent wave data (height, period, and direction), and the last ten columns (aligned with the eam list) represent current speed at ten elevations from water surface to seabed, in decreasing order. SACS, however, requires current to be input in increasing order of elevation, and so the ranges are reversed (to stay aligned with eam) and stepped in negative increments to pick the appropriate column indices; see the sketch after the list below. Other than that, the script simply re-prints data from the dataframe in the fixed format that SACS requires. Here’s how it works:
- Loads data from the CSV file into a dataframe
- Prints a FILE B line for a stand-alone seastate file
- Begins a loop over all lines in the CSV file
- Prints WAVE cards from the wave data in the first three columns
- Prints CURR cards (incl. a multiline loop) from the current data
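To see the reversed ranges at work, the pairing produced by the CURR loop can be printed stand-alone. Note that eam[9] (1.18 m, nearest the seabed) is handled by the first CURR card against column 12, and the loop then walks the remaining elevations upward:

# pairing of elevations above mudline with dataframe column indices,
# exactly as in the CURR loop of slc.py above
eam = [166.18, 151.18, 141.18, 121.18, 101.18, 81.18, 61.18, 41.18, 21.18, 1.18]
for n, m in zip(range(8, -1, -1), range(11, 2, -1)):
    print(f"eam[{n}] = {eam[n]:>7}m  <- column {m}")
# prints eam[8] = 21.18m <- column 11, up through eam[0] = 166.18m <- column 3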
There is of course an opportunity to make this script generic (e.g., by updating it to automatically count columns from either side and generate column indices accordingly) so that there is no need to refactor the code should the data structure change, but this code solved our immediate problem.
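As a rough sketch of that generalisation idea, and assuming wave data always occupies the first three columns with current speeds filling the rest, the column range could be derived from the dataframe width instead of being hard-coded:

import pandas as pd

df = pd.read_csv("FTS001.000040TS.csv")  # example formatted file name
ncols = df.shape[1]
# current-speed columns, counted from the last column back to the fourth;
# eam would then need (ncols - 3) entries to stay aligned
curr_cols = list(range(ncols - 1, 2, -1))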