Got the skeleton of an interaction system working now. As it turns out, sets in Rhombus are not the same as sets in Racket --- if you pass a Rhombus set to in-set, you get a runtime crash. Ouch. Oh well, more hacks I guess.
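For future reference, here's the failure mode and the guard I mean, sketched in plain Racket (elements is a made-up helper, and the actual conversion hack depends on Rhombus internals, so it's elided here):

```racket
#lang racket
(require racket/set)

;; in-set only accepts values implementing Racket's gen:set interface;
;; a Rhombus Set is its own struct type and apparently doesn't, so
;; handing one over only blows up at runtime. Guarding up front at
;; least gives a clear error instead of a crash deep inside the loop.
(define (elements s)
  (unless (generic-set? s)
    (error 'elements "not a Racket set (convert the Rhombus set first): ~e" s))
  (for/list ([x (in-set s)]) x))

(elements (set 1 2 3))   ; => '(1 2 3)
```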
Had a major revelation on how to structure programs in an Entity-Component-System design! The problem I was running into was implementing a "context-sensitive" interact button --- when the player is near an interactable thing and presses the action button, the game should respond somehow, but how can we do that in a generic and extensible way?
Solution? Add an ActiveInteraction component, and implement context-sensitive responses by means of dedicated systems, one per kind of interaction. (Initially, I was thinking of stuffing a lambda, or some defunctionalised deferred message into the system, but that would have been hairy.)
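A minimal sketch of the pattern in plain Racket (not my actual Rhombus code --- the entity representation, the 'open-door kind, and all the names here are made up for illustration): the input system only tags an entity with an ActiveInteraction, and a dedicated system consumes tags of the kind it knows about.

```racket
#lang racket

;; Components are plain structs.
(struct active-interaction (kind) #:transparent)
(struct door (open?) #:mutable #:transparent)

;; An entity is just a mutable hash from component name to value.
(define (make-door-entity)
  (make-hash (list (cons 'door (door #f)))))

;; Input "system": when the action button fires near an interactable,
;; tag the entity. It knows nothing about what the interaction does.
(define (press-action! e)
  (hash-set! e 'active-interaction (active-interaction 'open-door)))

;; One dedicated system per interaction kind: this one consumes
;; 'open-door interactions and nothing else.
(define (door-interaction-system! entities)
  (for ([e (in-list entities)])
    (define ai (hash-ref e 'active-interaction #f))
    (when (and ai (eq? (active-interaction-kind ai) 'open-door))
      (set-door-open?! (hash-ref e 'door) #t)
      (hash-remove! e 'active-interaction))))   ; interaction handled

;; Tiny demo:
(define world (list (make-door-entity)))
(press-action! (first world))
(door-interaction-system! world)
(door-open? (hash-ref (first world) 'door))   ; => #t
```

The nice part is the extensibility: a new kind of interaction is just a new tag plus one new system, and nothing that already exists has to change.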
Some people just want to watch the world burn...
Implemented collisions; it actually turned out to be pretty simple. I had to modify some Rhombus macros along the way --- they're not too hard to maintain, it seems.
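"Simple" here means axis-aligned rectangle overlap; this is the shape of the test in plain Racket (the real version lives in Rhombus, and the aabb struct is just for illustration):

```racket
#lang racket

;; Two axis-aligned boxes overlap iff they overlap on both axes.
(struct aabb (x y w h) #:transparent)

(define (collides? a b)
  (and (< (aabb-x a) (+ (aabb-x b) (aabb-w b)))
       (< (aabb-x b) (+ (aabb-x a) (aabb-w a)))
       (< (aabb-y a) (+ (aabb-y b) (aabb-h b)))
       (< (aabb-y b) (+ (aabb-y a) (aabb-h a)))))

(collides? (aabb 0 0 10 10) (aabb 5 5 10 10))   ; => #t
(collides? (aabb 0 0 10 10) (aabb 20 0 5 5))    ; => #f
```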
Damn, completely forgot the coronation was today. Not that I was going to watch it, nor was I particularly interested, but I'm surprised that it completely slipped my mind.
No clue what's efficient in Racket/Rhombus or whatever, but got proper draw ordering implemented now! (I'm using a big growable array of pre-allocated structs, so even with the pointer chasing, I believe this should be reasonably fast). Overall, I'm really jiving with Rhombus! It really feels like the programming language of the future!
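Concretely, the scheme looks something like this in plain Racket terms, using data/gvector as the growable array (the actual code is Rhombus and the sprite names are invented; this loose sketch snapshots and sorts per frame rather than mirroring my exact layout):

```racket
#lang racket
(require racket/vector data/gvector)

;; A growable vector of pre-allocated mutable sprite structs,
;; sorted by depth each frame and drawn back-to-front.
(struct sprite (name z) #:mutable #:transparent)

(define sprites (gvector (sprite "player" 1) (sprite "bg" 0) (sprite "ui" 2)))

(define (draw-all!)
  (define v (gvector->vector sprites))     ; snapshot for sorting
  (vector-sort! v < #:key sprite-z)        ; lowest z drawn first
  (for ([s (in-vector v)])
    (printf "draw ~a (z=~a)\n" (sprite-name s) (sprite-z s))))

(draw-all!)
;; draw bg (z=0)
;; draw player (z=1)
;; draw ui (z=2)
```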
I refuse to try these proprietary SaaSS GPT services, so I have no idea what I'm missing out on, but LLaMa at least can be run locally, and while the results probably aren't much better than a Markov chain's, it's fun to play around with. Unfortunately, being able to easily train models yourself is probably still quite far away.
From playing around with Llama for programming tasks, I'm guessing that sometimes typing a question and reading the GPT-generated response can end up being faster than searching, checking individual pages, and trying to decipher what you want. It probably works best when you have a very specific question that touches on "folk" knowledge that people don't write down.
I'm probably shopping at the wrong places, I guess --- after spending the afternoon searching through VivoCity, $16 was the lowest price I could find.