if you could pick a standard format for a purpose what would it be and why?
e.g. flac for lossless audio because…
(yes you can add new categories)
summary:
- photos .jxl
- open domain image data .exr
- videos .av1
- lossless audio .flac
- lossy audio .opus
- subtitles srt/ass
- fonts .otf
- container mkv (doesn't contain .jxl)
- plain text utf-8 (many also want a standard markup format but disagree on which one)
- documents .odt
- archive files (this one is causing a bloodbath so i picked randomly) .tar.zst
- configuration files toml
- typesetting typst
- interchange format .ora
- models .gltf / .glb
- daw session files .dawproject
- otdr measurement results .xml
this is wrong. the first thing done before playing one of those files is running the audio through a low-pass filter that removes the extra frequencies 192 kHz captures, because most speakers can't reproduce them and would in fact distort the rest of the sound (by recreating them badly, resulting in aliasing).
192 kHz has a place, and it's called the recording studio. it's only useful for intermediate products in mixing and mastering. once that is done, only the audible portion is needed. the inaudible content can either be removed beforehand, saving storage space, or distributed (as 192 kHz files) and your player will remove it for you before playback.
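to make the filtering step concrete, here is a minimal numpy sketch (not any real player's code, and the tone frequencies are made up for illustration) of what a playback-time low-pass + downsample does: a 1-second 192 kHz signal containing an audible 1 kHz tone plus an inaudible 60 kHz tone is filtered and decimated to 48 kHz, so the ultrasonic content is removed instead of aliasing down to 12 kHz in the output.

```python
import numpy as np

fs_in, fs_out = 192_000, 48_000
decim = fs_in // fs_out  # keep every 4th sample

# hypothetical source: audible 1 kHz tone + inaudible 60 kHz tone
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 60_000 * t)

# windowed-sinc low-pass with cutoff at the new Nyquist limit (24 kHz)
cutoff = (fs_out / 2) / fs_in  # normalized cutoff in cycles/sample
n = np.arange(-128, 129)
h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(len(n))
h /= h.sum()  # unity gain at DC

# filter, then decimate: the 60 kHz tone is attenuated away instead of
# folding down to 12 kHz (60 kHz - 48 kHz) in the 48 kHz output
y = np.convolve(x, h, mode="same")[::decim]

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=1 / fs_out)
# the dominant frequency in the output is the audible 1 kHz tone
print(freqs[np.argmax(spectrum)])
```

skipping the filter and just taking every 4th sample would instead leave a spurious 12 kHz tone in the audible band, which is exactly the distortion described above.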