Post by Yazoo
Post by Mark
I used to update it a few times during the season - fully automated and
taken straight from the F1 web page - but they've made the data _so_
horrible (and change it even mid-season) it's become semi-manual and
long-winded.
F1's websites were always horrible. They don't understand that making
the site user-friendly would help F1 become more popular.
If they made an API so the data could be freely used by others, many
would use it and spread the word (the open-data concept).
Not just the schedule, but all the data: live timing, standings, other
statistics, everything.
It's one of the reasons I've been a bit slower with results these last
couple of years. For a decade, that was fully automated: a Python script
would parse the F1 website, compile the full set of results and inject
them into the SQL table. Another script would then build the results post.
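The shape of it was roughly this (a from-memory sketch rather than the
real script - the table schema and column names are illustrative):

import sqlite3
import urllib.request

def fetch(url):
    # Grab the raw page; all the interesting (and fragile) work
    # happens in the parse step, discussed below.
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def store(rows, db_path="results.db"):
    # Inject the parsed rows into the SQL table (illustrative schema).
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS results "
                "(race TEXT, pos INTEGER, driver TEXT, team TEXT, gap TEXT)")
    con.executemany("INSERT INTO results VALUES (?, ?, ?, ?, ?)", rows)
    con.commit()
    con.close()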
The pages were never great (malformed HTML and XML), but a few filters
carefully nudged them into compliance for the full parse. All I had to
do was run the scripts, check the results were right and hit "post".
Literally - even with a proper check - 60 seconds of my time.
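The filters were mundane string fix-ups applied before handing the page
to the parser; think of things like these (illustrative examples, not
the actual list):

import re
import xml.etree.ElementTree as ET

# Each filter is a (pattern, replacement) pair run over the raw page.
FILTERS = [
    (re.compile(r"&nbsp;"), "&#160;"),        # HTML-only entity XML parsers reject
    (re.compile(r"&(?![A-Za-z#])"), "&amp;"), # bare ampersands
    (re.compile(r"<br>", re.I), "<br/>"),     # unclosed void tags
    (re.compile(r"<img([^>]*?)(?<!/)>", re.I), r"<img\1/>"),
]

def repair(page):
    for pattern, replacement in FILTERS:
        page = pattern.sub(replacement, page)
    return page

# After repair, a standard parse would (usually) succeed:
#   root = ET.fromstring(repair(page))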
About 18 months ago, they changed it mid-season. Suddenly, the markup
was *so* badly malformed that I was writing a whole new custom parser
(I could no longer take the short-cut of using a standard XML parser).
Not only did it need to be tailored to their weird* layout, but the
nature of the brokenness changed from weekend to weekend. That
one-minute job became (often) an hour or two, making automation
pointless, as it was easier to record the results manually.
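To give a flavour of what "custom parser" means here: once the tag soup
is too broken to build a tree from, you're reduced to fishing for the
data with patterns tied to that weekend's markup (the class name below
is a made-up example - in reality it kept changing, which was the
killer):

import re

ROW = re.compile(r'class="driver-name"[^>]*>([^<]+)<', re.I)

def scrape_drivers(page):
    # Pull driver names straight out of the raw text, no tree needed.
    return [name.strip() for name in ROW.findall(page)]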
Finally, I gave up completely on automation for this season. It means I
have to find time to check the results (and there have been a lot of
late appeals and amendments) and update the results file by hand (the
updates from that file are still automated). As a result, I mostly
don't even try to publish the same day.
All because web designers these days are *so* sloppy they can't even
get basic validation of data right... and the browser vendors are
complicit, having built acceptance of this brokenness into their
browsers so that they can render even the most broken pages (mostly).
* I'm talking about a level of unmatched and mismatched tags that makes
validation tools and parsers give up in frustration. Just one simple
example: the 96 violations (24 errors, 4 warnings, 68 info advisories)
that the W3C validator reports on the front results page alone:
https://validator.w3.org/nu/?doc=https%3A%2F%2Fwww.formula1.com%2Fen%2Fresults%2F2024%2Fraces