
Convert a text file with colon-separated values to an HTML table

I have a big file with more than 10,000 records, formatted as below.

This needs to be converted to an HTML table.

I tried various ways of converting it to CSV and then to HTML, but so far I have not been able to get the desired output.

Data.txt

Name : john
age : 20
tag id : 1234567
Name : Mark
age : 40
tag id : 832245
Name : tom
age : 60
tag id : 789324
......

I want this to be converted into an HTML table like this:

Name Age Tagid
John 20 1234567
Mark 40 832245
tom 60 789324

I need to process files of 10,000 records. How can I do that?


2 Answers

Works with gawk or nawk, but not mawk.

awk -F '[[:blank:]]*:[[:blank:]]*' '
BEGIN { print "<table><thead><tr><th>Name</th><th>Age</th><th>Tagid</th></tr></thead><tbody>" }
{
    name = $2; getline; age = $2; getline; tagid = $2
    print "<tr><td>" name "</td><td>" age "</td><td>" tagid "</td></tr>"
}
END { print "</tbody></table>" }
' Data.txt > Data.html

This assumes that there will be exactly three lines for each record, in the order Name, age, tag id.
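If that three-lines-in-order assumption ever breaks, a key-based variant (a sketch, using the same field separator) matches on the field name instead of the line position and flushes a row whenever a "tag id" line is seen; the small Data.txt here is just sample input for illustration:

```shell
cat > Data.txt <<'EOF'
Name : john
age : 20
tag id : 1234567
Name : Mark
age : 40
tag id : 832245
EOF

awk -F '[[:blank:]]*:[[:blank:]]*' '
BEGIN { print "<table><thead><tr><th>Name</th><th>Age</th><th>Tagid</th></tr></thead><tbody>" }
$1 == "Name"   { name = $2 }         # remember the name until the record completes
$1 == "age"    { age = $2 }
$1 == "tag id" { print "<tr><td>" name "</td><td>" age "</td><td>" $2 "</td></tr>" }
END { print "</tbody></table>" }
' Data.txt > Data.html
```

The same caveat about mawk and `[[:blank:]]` applies here, since the separator is unchanged.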


This would be much shorter if I knew how to do multiple search-and-replaces in sed, but I don't. Oh well. After that, it got silly; no awk required. I'm assuming your data file is named bs.dat and that you want a CSV for migrating to a real database system later. It outputs an awesome HTML file too, though it may need some CSS. This lousy output is HTML5 compliant (as is).

#!/bin/bash
rm -f me lel.html
touch me p1 p2 p3 p4 lel.html
#Fix BS data, make a proper csv
c=","
#remove spaces
sed 's/ //g' bs.dat > p1
#remove Name:
sed 's/Name://g' p1 > p2
#remove age:
sed 's/age://g' p2 > p3
#remove tagid:
sed 's/tagid://g' p3 > p4
#make a csv
file=p4
i=1
while read line; do
    if [ "$i" = "1" ]; then
        l1=$line$c; i=2
    elif [ "$i" = "2" ]; then
        l2=$l1$line$c; i=3
    elif [ "$i" = "3" ]; then
        echo "$l2$line" >> me; i=1
    else
        echo "something went wrong: $line"; exit 1
    fi
done <"$file"
rm p1 p2 p3 p4
#Cool now we have a proper csv for later when we need to migrate to a real database
#ok lets make some html
touch lel.html
echo "<!DOCTYPE html><html><head><meta http-equiv=\"content-type\" content=\"text/html; charset=UTF-8\">" > lel.html
echo "<meta content=\"code, bash, lolz\" name=\"keywords\" /><title>IDK what</title></head><body>" >> lel.html
echo "<pre>Name Age ID " >> lel.html
while IFS=, read col1 col2 col3
do echo "$col1 $col2 $col3" >> lel.html
done < me
echo "</pre></body></html>" >> lel.html
firefox lel.html

Given that you have a large data file, you may opt to remove the p1-p4 files earlier. The CSV output will be smaller than the input, as will each of the intermediate outputs, but the approach is disk-intensive; I made zero effort toward efficiency or resource conservation.
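For what it's worth, sed does accept multiple -e expressions, applied in order on each line, so the four passes and the p1-p3 temp files collapse into a single command. A sketch against a small sample (bs.dat here is generated just for the demo):

```shell
cat > bs.dat <<'EOF'
Name : john
age : 20
tag id : 1234567
EOF

# One sed invocation: strip spaces first, then the now-joined labels.
sed -e 's/ //g' -e 's/Name://' -e 's/age://' -e 's/tagid://' bs.dat > p4
cat p4
```

Note the ordering matters: `tag id :` only becomes `tagid:` after the space-stripping expression has run on that line.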

Also: the names will be pushed together. Wait, I don't see FirstName LastName in this data; I assume they actually exist but were omitted for simplicity. There is a simple fix for that using a regex: wherever [a-z][A-Z] occurs in the first column value, insert a space.
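That regex fix might look like this (a sketch; "JohnSmith" is a made-up example value, and on older GNU sed you may need -r instead of -E):

```shell
# Insert a space wherever a lowercase letter is immediately
# followed by an uppercase letter.
echo "JohnSmith,20,1234567" | sed -E 's/([a-z])([A-Z])/\1 \2/g'
```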
