Compare commits

...

3 Commits

Author SHA1 Message Date
325f72e0e0 fix: add Welcome section and fix homepage snippet layout
Add the hardcoded Welcome card matching the live elmstatic site.
Put all snippets into the 2-column grid instead of making the
first one full-width. Sort snippets by filename to ensure correct
display order.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 22:27:17 -05:00
e6c8c77805 refactor: use ssg package and sync content from elmstatic
Replace ~200 lines of inline SSG logic (types, accessors, parsing,
sorting, tags, file I/O) with imports from the new ssg package.
Sync updated Lyceum article, images, snippet, and CSS fixes
(h3/h4 font-bold, ol list-decimal, blockquote/li spacing) from
blu-elmstatic.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 22:04:17 -05:00
67a838307f docs: mark issues 7, 8, 14 as fixed
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 21:11:44 -05:00
36 changed files with 250 additions and 453 deletions

View File

@@ -123,45 +123,29 @@ List.map(items, addOne)
## Issue 7: Effectful callbacks in `List.map`/`forEach`/`fold`
**Category**: Type checker limitation **Fixed**
**Severity**: Medium
The type checker requires pure callbacks for higher-order List functions, but effectful callbacks are often needed (e.g., reading files in a map operation).
**Reproduction**:
```lux
fn readFile(path: String): String with {File} = File.read(path)
let contents = List.map(paths, fn(p: String): String => readFile(p));
// ERROR: Effect mismatch: expected {File}, got {}
```
**Workaround**: Use manual recursion instead of List.map/forEach:
```lux
fn mapRead(paths: List<String>): List<String> with {File} =
    match List.head(paths) {
        None => [],
        Some(p) => {
            let content = readFile(p);
            match List.tail(paths) {
                Some(rest) => List.concat([content], mapRead(rest)),
                None => [content]
            }
        }
    }
```
**Fix**: Added effect propagation in `src/typechecker.rs`: callback arguments with effect annotations now propagate their effects to the enclosing function's inferred effect set, in `infer_call`, `infer_effect_op` (module access path), and `infer_effect_op` (effect op path).
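A minimal sketch of code that should now type-check under this fix, reusing the `readFile` helper from the reproduction above (Lux syntax assumed from the surrounding examples):
```lux
fn readFile(path: String): String with {File} = File.read(path)

// The callback's {File} effect now propagates to the enclosing
// function's inferred effect set, so no manual recursion is needed:
fn readAll(paths: List<String>): List<String> with {File} =
    List.map(paths, fn(p: String): String => readFile(p))
```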
---
## Issue 8: `String.indexOf` and `String.lastIndexOf` broken in C backend
**Category**: C backend bug **Fixed**
**Severity**: Medium
Type registrations were added previously, but C compilation of code using `String.indexOf`/`lastIndexOf` failed with type errors. Three root causes:
1. Global `let` bindings always declared as `static LuxInt` regardless of value type
2. `Option<Int>` inner type not tracked through function parameters, causing match extraction to default to `LuxString`
3. `indexOf`/`lastIndexOf` stored ints as `(void*)(intptr_t)` but extraction expected boxed pointers (inconsistent with `parseInt`)
**Fix**: In `src/codegen/c_backend.rs`:
- `emit_global_let` now infers type from value expression
- Added `var_option_inner_types` map; function params with `Option<T>` annotations are tracked
- `indexOf`/`lastIndexOf` now use `lux_box_int` consistently; extraction dereferences via `*(LuxInt*)`
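As a usage sketch (hypothetical values; indices assumed 0-based, consistent with the `Option<Int>` registrations above), code like this should now compile and run through the C backend:
```lux
let first = String.indexOf("abcabc", "b");  // Option<Int>
let offset = match first {
    Some(i) => i,   // now extracted via *(LuxInt*), not as LuxString
    None => 0
};
```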
---
@@ -243,12 +227,10 @@ let json = match Json.parse(raw) { Ok(j) => j, Err(_) => ... };
## Issue 14: No `File.copy`
**Category**: Missing feature **Fixed**
**Severity**: Low
Must shell out to copy files/directories.
**Workaround**: `Process.exec("cp -r static/* _site/")`.
**Fix**: Added `File.copy(source, dest)` to types.rs, interpreter.rs (using `std::fs::copy`), and C backend (fread/fwrite buffer copy).
---
@@ -273,12 +255,12 @@ Must manually scan directories and filter by extension.
| 4 | Tuple field access `.0` `.1` | High | Open |
| 5 | Multi-line function arguments | High | Open |
| 6 | Multi-line lambdas in calls | High | Open |
| 7 | Effectful callbacks in List HOFs | Medium | **Fixed** |
| 8 | String.indexOf/lastIndexOf C backend | Medium | **Fixed** |
| 9 | No List.sort | Medium | Open |
| 10 | No HashMap/Map | Medium | Open |
| 11 | No regex | Medium | Open |
| 12 | No multiline strings | Medium | Open |
| 13 | Json.parse return type docs | Low | Open |
| 14 | No File.copy | Low | **Fixed** |
| 15 | No file globbing | Low | Open |

View File

@@ -1,228 +0,0 @@
# Introducing Lyceum: The Richest Interface for Reading Ancient Greek Texts
<https://lyceum.quest>
I built a tool with the goal of making Ancient Greek authors more accessible in their own words. I started trying to learn Ancient Greek a few months ago so that I could read the Greek works that shaped history (Homer, Plato, Aristotle, the New Testament, Marcus Aurelius' _Meditations_, and so on) without the meaning being filtered through a translation. I started with the available online resources and books such as _Athenaze_ and _Reading Greek_ (and still use them!), but became bogged down trying to memorize endless nuanced rules of grammar and decipher stories made up by the authors, which made the learning experience boring and tedious; full glossaries often weren't given, so many of the morphological forms required inference. I don't care about some random made-up Athenian named Δικαιοπολις and the silly hijinks of his small family farm; I want to read the _Iliad_! And more importantly, despite knowing more about Greek grammar than English grammar after a few chapters, I still couldn't read a word of the real thing! I became skeptical of this method and began looking for ways to learn it by _reading what I wanted to read_. There are a couple of readers online which present the Ancient Greek texts in the original language, the best of which seems to be the [Scaife Reader](https://scaife.perseus.org), but even that is buggy, difficult to use, and missing features I wanted. So my goal was to combine the freely available datasets for lemmas, morphologies, and original texts and (some of) their translations into one unified reader that had all of the features I wanted for my own personal study. This way, I can hopefully learn Greek closer to the way the man who discovered Troy, [Heinrich Schliemann](https://en.wikipedia.org/wiki/Heinrich_Schliemann), learned it:
> "In order to acquire quickly the Greek vocabulary," Schliemann writes, "I procured a modern Greek translation of _Paul et Virginie_, and read it through, comparing every word with its equivalent in the French original. When I had finished this task I knew at least one half the Greek words the book contained; and after repeating the operation I knew them all, or nearly so, without having lost a single minute by being obliged to use a dictionary....Of the Greek grammar I learned only the declensions and the verbs, and never lost my precious time in studying its rules; for as I saw that boys, after being troubled and tormented for eight years and more in school with the tedious rules of grammar, can nevertheless none of them write a letter in ancient Greek without making hundreds of atrocious blunders, I thought the method pursued by the schoolmasters must be altogether wrong....I learned ancient Greek as I would have learned a living language."
## Current Features
So far, the reader includes:
- A browseable catalog of 373 ancient authors and 1,837 works in Ancient Greek
![Browseable catalog of ancient Greek texts](/images/articles/introducing-lyceum/browse.png)
- Multi-language translations when available from the data
![Multi-language translations side by side](/images/articles/introducing-lyceum/translations.png)
- Word-level definitions and morphology lookups from Perseus and LSJ. Click on any word to see its definition and other info!
![Word popup with definition and morphology](/images/articles/introducing-lyceum/popup.png)
- Anki-style vocab card import from texts. Learn the words of the texts you are trying to read as you read them with spaced repetition!
![Spaced repetition flashcard](/images/articles/introducing-lyceum/srs-card-1.png)
- Dashboard to track progress
![Progress dashboard](/images/articles/introducing-lyceum/dashboard.png)
- (For select texts): Word-level interlinear contextual AI definitions (show what each word means in its context)
- (For select texts): Word-level AI generated Latin transliteration (for pronunciation clarity)
(side-note on the use of AI: I know that it is discouraged in the rules on this subreddit, but please read my case for it in the [full article]()).
And there are many features and cleanup items on my TODO list, like:
- Add missing translations for popular texts (e.g. Meditations)
- Add more transliterations and contextual glosses (e.g. interlinear)
- Make texts downloadable in nice readable formats. A fun thing might even be to allow PDF generation in whatever display format options you've selected for offline reading.
- Fix any missing or incorrect dictionary definitions, with a user-driven feedback mechanism.
- Audio for pronunciation
- A mobile app
I would love any feedback on this. Try it and let me know what you think!
Texts are shown with side-by-side English (and other-language) translations when available in the data. There is a sidebar for viewing selected grammatical information for each word as you read, which includes color-coding for parts of speech (e.g. noun/article/verb), color-coding of inflected form endings, and grammar badges above each word. In the Greek originals, you can click on any word to see a popup of its definition from the [Perseus Digital Library]() or the [Liddell-Scott-Jones Ancient Greek Lexicon (LSJ)](https://en.wikipedia.org/wiki/A_Greek%E2%80%93English_Lexicon). Since many Greek words have many potential definitions in different contexts, the Pro version provides contextual AI-generated meanings for select texts (with many more in the works) so that you can see the definition of a Greek word _in the context it was written in_, instead of having to guess which of the definitions is being used in that particular instance.
There is a reader view to show just the text (or translations if you like beside it) for distraction-free reading.
![Reader view for distraction-free reading](/images/articles/introducing-lyceum/reader-view.png)
Finally, one of my personal favorite features is the Anki-like spaced-repetition cards, where you can add cards as you read them (or by section, or in bulk) to your deck for review.
## Why Build This?
### Love of History
Living in Boston, my fascination with history has grown immensely over the past few years. The natural starting point was American Revolutionary history, and the epic adventures of founders Jefferson (Jon Meacham), Washington (Ron Chernow), Franklin (Walter Isaacson), Hamilton (Chernow), and especially John Adams (David McCullough, the _John Adams_ TV series) ignited a passion that resulted in _their_ historical heroes becoming mine also. Cicero and Rome became my fascinations as they were Adams'. And recently, Greece became my passion as it was Cicero's. And this was cemented with Will Durant's excellent books _Caesar and Christ_ and _The Life of Greece_.
### For Most of History, Greek Was an Essential Aspect of Education
One interesting thing to note was
### How Best to Learn?
I attempted several methods, and in each I felt something was lacking. At first, I researched on Grok and Reddit, discovered books that came highly recommended, and bought them: _Athenaze_ and _Reading Greek_. After being "troubled and tormented with the tedious rules of grammar," as Schliemann wrote, I had an understanding of the foundational grammar and some of the most common words. But with full-time work commitments, family responsibilities, and other hobbies, this was an unsustainable and painful way to learn, and after all this tedium I discovered that I knew a lot about the made-up Athenian farmer Δικαιοπολις and couldn't read _any_ of the _Iliad_. I became demotivated and burned out.
I did some searching and found the book _Complete Ancient Greek_ by Gavin Betts and Alan Henry, which agreed with my sentiment that Greek should be learned with the material the student _wants to read_ kept top of mind. Even so, the vocabularies given were often incomplete (a surprisingly common problem in these Greek textbooks!), which made deciphering even the basic exercises unnecessarily painful, and the choice phrases from works like the _Odyssey_ were modified and simplified, which still gave me a sense of inauthenticity (although I felt much closer) and of an unnecessary roundaboutness in the learning process. So the search to learn by getting closer to the source continued.
I figured I'd start with the beginning of Western literature, the _Iliad_, and I searched for an interlinear/Hamiltonian translation (where each word is translated according to its contextual definition). I thought I'd be able to do this for any of the major texts I wanted to read, but these sorts of translations are surprisingly sparse! I found one on [archive.org](https://archive.org/details/iliadhomerwitha00clargoog/page/n12/mode/2up) and began making an [Anki]() deck for each word and memorizing it. There were still a couple of problems. Although this was probably good enough to learn from, it was still difficult for me to completely understand what was happening, and I wanted to pull up a couple of other translations side by side for comparison. This way I could compare multiple editions of natural English against Hamilton's interlinear version to get a comprehensive sense of the meaning where it wasn't clear from the interlinear. Also, adding Anki cards manually for each word was a very time-consuming process.
Here the idea for the website began to take shape. What if I could combine all the features I was looking for (interlinear texts, side-by-side translations, and the ability to import words into an Anki-like system for spaced-repetition word learning from whatever text you were reading) into one web interface?
### Finding Texts in the Original Greek, in the Format You Want, is Very Difficult
If you go on eBay or Amazon and search for the Odyssey in Ancient Greek, you'll probably be presented with the Loeb Library edition. As far as I can tell these are the best in-class
One of the key features I needed for this was an interlinear translation. Going on eBay and finding your book of choice in _only Greek_ is very difficult. Finding it in
### Alternative Solutions Are Insufficient
The best viewer I've found online for reading Ancient Greek texts is the [Scaife](https://scaife.perseus.org) viewer from Perseus, but it has many problems. It is old and outdated, and the web deserves something better. You can start by taking a look at its [Google Lighthouse](https://developer.chrome.com/docs/lighthouse) score:
![Scaife Lighthouse score](/images/articles/introducing-lyceum/scaife-lighthouse-score.png)
### There is No Substitute for the Original
When I started looking into ancient Rome and Greece, my desire to forego translations kept rising. More and more, I began to realize that translations are often not what they say they are. They are often more like movies based on books -- which is to say that they deviate frequently and often significantly from the source material, and even in translations which attempt to be faithful the author's voice inevitably intrudes.
I actually learned this most plainly while building this website. While building the contextual glosses for Aesop's Fables, I noticed a word I recognized in Book 1, _ἀλώπεκος_, which means "fox", didn't appear at all in the famous [Townsend](https://en.wikipedia.org/wiki/George_Fyler_Townsend) translation, which is considered a standard translation. Here is the original Greek:
> Λέαινα ὀνειδιζομένη ὑπὸ ἀλώπεκος ἐπὶ τῷ διὰ παντὸς ἕνα τίκτειν·" Ἕνα, ἔφη, ἀλλὰ λέοντα." Ὅτι τὸ καλὸν οὐκ ἐν πλήθει δεῖ μετρεῖν, ἀλλὰ πρὸς ἀρετὴν ἀφορᾶν.
Here is the [Townsend translation](https://demo.lyceum.quest/read/tlg0096.tlg002.perry-grc1?ref=257&trans=tlg0096.tlg002.perry-eng1):
> The Lioness
>
> A CONTROVERSY prevailed among the beasts of the field as to which of the animals deserved the most credit for producing the greatest number of whelps at a birth. They rushed clamorously into the presence of the Lioness and demanded of her the settlement of the dispute. 'And you,' they said, 'how many sons have you at a birth?' The Lioness laughed at them, and said: 'Why! I have only one; but that one is altogether a thoroughbred Lion.' The value is in the worth, not in the number.
You can see at a glance that there are a lot more words in the Townsend translation than in the original Greek, which aroused my suspicion; it was further aroused by looking at the contextual meanings I had generated for it:
![Contextual meaning comparison](/images/articles/introducing-lyceum/read.png)
The AI-generated contextual meaning claims that this is the genitive form of "fox", i.e. "of fox". Is it wrong? Well, the nice thing about our tool is that we can immediately cross-reference with the LSJ dictionary, or, better yet for our purposes, Perseus for the various morphological forms of a word. We can click the word to open up the popup, as below, to compare:
![Popup showing morphological validation](/images/articles/introducing-lyceum/popup.png)
And we see that this does indeed mean "fox". So there is an entire character in the original Greek fable that is missing in a widely-respected translation! Cross referencing multiple AI responses and combining that with a by-hand interlinear-ization using Perseus data, we can see that the original fable translates to something more like:
> The lioness was being reproached by the fox because she always gave birth to only one.
> “One,” she replied, “but a lion.”
> For the noble is not to be measured by quantity, but one should have regard for virtue.
This problem pervades all translations. This sentiment is better expressed in the introduction to the book _Complete Ancient Greek_ by Gavin Betts and Alan Henry than I could give myself:
> A modern translation of an ancient classic such as Homer's Iliad often puzzles readers with
> the difference between the work's overall conception and the flatness of the English. The
> work's true merit may flicker dimly through the translation's mundane prose or clumsy verse
> but any subtlety is missing. Instead of a literary masterpiece we are often left with a
> hotchpotch of banal words and awkward expressions. Take this version of the first lines of the
> Iliad: _The Wrath of Achilles is my theme, that fatal wrath which, in fulfilment of the will of
> Zeus, brought the Achaeans so much suffering and sent the gallant souls of many
> noblemen to Hades, leaving their bodies as carrion for the dogs and passing birds. Let us
> begin, goddess of song, with the angry parting that took place between Agamemnon King of
> Men and the great Achilles son of Peleus. Which of the gods was it that made them quarrel?_
> (translated E.V. Rieu, Penguin Books 1950) Can this really represent the work of a poet who
> has been universally admired for millennia? Or is it a TV announcer introducing a guest
> singer, whom he flatters with the trite phrase 'goddess of song'?
> Compare the eighteenth-century translation of Alexander Pope:
> _Achilles' wrath, to Greece
> the direful spring
> Of woes unnumber'd, heavenly goddess, sing!
> That wrath which hurl'd to Pluto's gloomy reign
> The souls of mighty chiefs untimely slain;
> Whose limbs unburied on the naked shore,
> Devouring dogs and hungry vultures tore:
> Since great Achilles and Atrides strove,
> Such was the sovereign doom, and such the will of Jove!
> Declare, O Muse! in what ill-fated hour
> Sprung the fierce strife, from what offended power?_
> Here we have genuine poetry. Only when the translator himself is a real poet can the result
> give some idea of the original but even then its true spirit is lost and, as here, the translator's
> own style and personality inevitably intrudes. There is no substitute for getting back to the
> author's actual words. To understand and appreciate the masterpieces of ancient Greek
> literature we must go back to the original Greek.
There is no substitute for the original. The only solution is to learn the languages.
### Building with Claude: AI is Surprisingly Good at This
With all the noise being made about how great AI is lately, I wondered if I could use it to create something that satisfies my requirements. Could I use it to speed up the process of collecting and organizing various data sources for word definitions, morphologies (the various forms of each word _lemma_ and how that changes its meaning), source and translation texts? And since interlinear versions of most texts do not exist (and even less in a machine-readable format), could I use an LLM to create a pipeline to generate them? Could I get it anywhere near the quality of Hamilton's system?
The questions, in essence, became:
1. Could I create a system that achieves parity with the best available tools for making original Greek sources and their translations accessible?
2. Could I improve upon them with new features and ways to engage with the texts?
3. Could AI reliably create interlinear versions of the texts with enough accuracy to be more helpful than harmful?
I believe the free features already surpass the top competing sites (Perseus, Scaife, etc.) and combine their best features into a much more accessible view. The Pro tier adds Anki-like spaced repetition, Hamiltonian-style interlinears with contextual meanings, and progress charts. This is the
The r/AncientGreek subreddit rules state the following:
> Machine translators and AI are not reliable.
>
> ChatGPT, Google Translate, and the like will confidently give you wrong answers about translations and Latin grammar. And if you only have a beginner's proficiency in Ancient Greek, there will be enough correct information to trick you. Generally, posts about machine translators and AI will be removed.
I sympathize with this sentiment, especially about AI's tendency to hallucinate. But I think it is a) a partially outdated opinion (AI tools have gotten better and better under the right supervision), and b) an underestimate of the value AI can bring by doing "grunt work" which can later be validated.
Interestingly, there is a relatively recent [discussion]() on the same subreddit where several users talk about how AI has come in handy for their personal study of Ancient Greek. I wonder whether those rules were set in the earlier days of AI, when the false-positive rate was high enough to render it more harmful than helpful. But I think a few points need to be made here about AI:
1. Humans can (and do) make mistakes or simply make things up to suit their retelling (as in Townsend's translations above -- or really any of the mainstream Aesop translations that I've found).
2. The possibility of errors does not inherently make something more of a hindrance to learning than a help. Rather, the question is whether the error rate is high enough that it is doing more harm than good.
3. Since learning Ancient Greek is a far more niche activity than it used to be, quality resources for learning it are becoming rarer and rarer; based on the earlier discussion of the Townsend translations, we can see it's often the case that the current state
4. AI can do the initial grunt work no or few humans are willing to do, and thus make the problem one of _validation_ (which can be done by careful review from experts)
To be clear, AI _does_ hallucinate, and it does so frequently, but many of the current models are quite good at doing proper interpretations,
### Data Sources
#### Texts
- [Perseus Digital Library](https://github.com/PerseusDL/canonical-greekLit): Greek texts and English translations from the canonical-greekLit repository. 50+ authors, 631 works, from Homer through the Church Fathers.
- [Diorisis Ancient Greek Corpus](https://figshare.com/articles/dataset/The_Diorisis_Ancient_Greek_Corpus/6187256): 820 pre-lemmatized XML texts with word-level morphology, POS tagging, and sentence boundaries. Used to aid in generating contextual meanings.
- [First1K Greek Project](https://github.com/OpenGreekAndLatin/First1KGreek): Additional Greek texts from the Open Greek and Latin project at the University of Leipzig.
- [Chambry Aesopica](http://www.mythfolklore.net/aesopica/): Chambry's critical edition of Aesop's Fables with Perry numbering, multiple recensions, and scholarly apparatus.
- [Townsend Aesop Translation](https://www.gutenberg.org/ebooks/21): George Fyler Townsend's 1867 English translation of Aesop's Fables, used for parallel reading.
#### Morphology & Glosses
- [Perseus Ancient Greek Dependency Treebank](https://github.com/PerseusDL/treebank_data): Syntactic annotations (dependency trees) and glosses for select texts including Aesop, Homer, and Attic prose. Used for word-level alignment with translations.
- [Diogenes](https://github.com/pjheslin/diogenes): Morphological analyses for 400K+ Greek word forms. Each entry maps an inflected form to its lemma, part of speech, and full grammatical analysis (case, number, gender, tense, voice, mood, person).
#### Definitions
- [LSJ](https://lsj.gr/): 116,000+ dictionary entries from the definitive Ancient Greek lexicon (9th edition, 1940). Full scholarly definitions with citations, etymology, and usage notes. The "gold standard" comprehensive Greek definitions.
#### Claude AI
- **Content**: AI-generated contextual glosses that analyze word meaning within specific passages. Used to disambiguate words with multiple meanings (e.g., "λόγος" as "word" vs "argument" vs "reason" depending on context).
### Building the Tool I Wish Existed
Ultimately I'm building this because I want it to exist. I now use the spaced-repetition feature every day and am learning the words of the _Iliad_ and Aesop's _Fables_ with it. I will continue to add features that I (and others) find useful. I'm sure there are plenty of mistakes to correct and improvements to be made that I'm missing, so please join our Discord or send an email to <support@lyceum.quest>.
### Gratitude for Previous Work
Ross Scaife, Herculaneum project, Loeb Library, Athenaze, Latinium
History dies under two conditions, I think:
1. A failure to _preserve_ the present. This is the job of the people who were alive during the events.
2. The failure to _breathe new life_ into it, to unearth the bones and tombs and scrolls of the past. That is the job of posterity.
We can't do anything about (1), but I believe it is our duty to pursue (2). The majority of the debates we have about this or that social, political, or philosophical issue are not new -- they were, for the most part, already debated in Ancient Greece, and with a few exceptions, it's debatable to what extent we have improved on the groundwork they laid for any given topic.
Emerson said history must be rediscovered

View File

@@ -0,0 +1,174 @@
---
title: "Introducing Lyceum: A Modern Interface for Reading Ancient Greek Texts"
description: Lyceum is a modern interface for reading Ancient Greek works in their original language, with interlinear translations, morphology, and spaced-repetition vocabulary learning.
date: 2026-02-23
tags: greek language-learning software
---
> _I learned ancient Greek as I would have learned a living language._
>
> Heinrich Schliemann, the man who rediscovered the lost city of Troy, quoted from _The Story of Civilization Vol. II: The Life of Greece_
Months ago, after having visited Rome and delving deeper and deeper into ancient history, I decided I couldn't tolerate translations for the Ancient authors. I felt compelled to read them in their own words. And this meant that I had to learn to read Greek and Latin. I started on that journey with Ancient Greek. I am still a beginner, but I felt strongly that a tool which met my learning needs was missing. So I used Claude to build a website to bridge the gap: [https://lyceum.quest](https://lyceum.quest)
I began with a top recommendation online: _Athenaze_. It has given me a great foundation, particularly for grammar, but I quickly became frustrated by a few things:
1. I was learning Greek through a made-up story about an Athenian farmer. It was engaging, but it wasn't "real" Ancient Greek and it wasn't a story they wrote, which quickly became demotivating when things got difficult. This isn't to fault the book, which is great, but I wanted something that put me in direct contact with the texts to make me feel like I was "unlocking" some ancient secret by learning the language.
2. So much of language learning is memorization, and we have a scientifically proven solution to the long-term memorization problem: [spaced-repetition](https://gwern.net/spaced-repetition), which I wasn't using, and which is very difficult to implement without the help of software. The strategy I was using of just haphazardly writing down words over and over was inefficient and exhausting.
3. Flipping through glossaries/dictionaries is difficult, and every language-learning book will force you to do this by the nature of that format. Each lesson introduced new words, but if I forgot a previous word somewhere I'd have to find which chapter it was in or flip to the dictionary. Since I didn't know _most_ words at the start of each chapter because my memorization technique was bad, I found myself spending a lot of time flipping pages. This was further complicated by the fact that each word is _inflected_, which meant the glossary definitions just had to pick one specific form of that word in the definition. This is confusing to a new student as the dictionary definition often just looks like a completely different word. Eventually, I ended up just typing the words I forgot into an LLM to have it tell me what it was, which was far faster. Double-checking with the glossary I found that the LLM was almost always correct.
4. The answer key to the book wasn't built in, which seems common practice for Latin/Greek books, likely because they are geared for universities. But for a self-learner, I had the constant sense that I was veering off course with no way to self-correct except to validate my answers via an LLM and online dictionaries. This was also incredibly frustrating.
There are some books that tried to address concern 1, such as _Complete Ancient Greek_ by Gavin Betts and Alan Henry, but they still have to simplify the original for learning purposes and usually do not seem to address 2-4.
I began to search more seriously for ways to just make direct contact with the authors I wanted to read, with some sort of translation to help me engage with the original. After a bit of searching, it seems that the most serious player in this game is the [Loeb Library](https://www.loebclassics.com/), which produces beautiful little books which have the Greek (or Latin) original on one page alongside a quality translation on the other. As far as I can tell, this is one of the only publishers which even _publishes_ works with original Greek, translation or no translation. I enjoyed these and tried reading side by side, but this too was difficult and fraught with error. The word orders, and sometimes even sentence orders, of the translations are different. While this was a step in the right direction as an aid to learning and made the process much more rewarding, it still felt incomplete to me.
Eventually I discovered the interlinear and/or [Hamiltonian](https://en.wikipedia.org/wiki/James_Hamilton_(language_teacher)) method of teaching these languages and stumbled across [his translation](https://archive.org/details/iliadhomerwitha00clargoog/page/n12/mode/2up) of the _Iliad_, which felt like a breath of fresh air. An interlinear translation is one where the translation of the word is placed directly below its appearance in the original. One problem with this is that word order in Greek can be completely different than in English, making it harder to comprehend. So Hamilton rearranges the words of the original in order to make them have a more natural flow in English.
![Hamilton's _Iliad_ Interlinear](/images/articles/introducing-lyceum/hamilton-interlinear.png)
When I stumbled on this, it felt like the obvious solution. Here the text came _alive_ to me in my language. I was engaging _directly_ and _immediately_ with my purpose for learning it. Suddenly learning transformed from something tedious into something fun: learning the language _through_ the ancient texts gives me the sensation of unlocking an ancient secret. Learning through made-up stories designed to teach lessons, by contrast, felt like busywork that obscured my true goals. The Hamiltonian method was based on the idea that the _vocabulary_, not the grammar, is the primary blocker to learning a language. This method was quite popular throughout the 19th century and was advocated by John Locke and others (as you can read in the introduction to Hamilton's _Iliad_ above), and it is still in use today by seminary students and by many of the most popular apps for Latin learning ([Legentibus](https://legentibus.com/), for example, uses interlinears). This was a solution to problems 1, 3, and 4 above, all in one.
So after learning the basics of declensions and verbs from _Athenaze_, I switched to solving problem 2: memorization. I painstakingly added Hamilton's cards by hand, word for word, into Anki. It was slow going, but I found that after a couple of days I could actually read a sentence or two of the real _Iliad_! And with each day, more.
Then, while listening to _The Story of Civilization, Vol. II: The Life of Greece_ by Will Durant, I heard the insane story of [Heinrich Schliemann](https://en.wikipedia.org/wiki/Heinrich_Schliemann). When Schliemann was an eight-year-old boy, his father read Homer to him, and the boy became so enamored with it that he promised his father that when he grew up he would rediscover the lost city. Around age thirty, a successful merchant, figuring he'd acquired enough wealth to fund his dream, he set out to find Troy. To the disbelief of many a scholar at the time, it appears that he [_actually did find it!_](https://en.wikipedia.org/wiki/Troy#) In a footnote, Durant includes Schliemann's journal entry describing how he learned the language, which was the final nudge I needed to inspire me to build Lyceum:
> "In order to acquire quickly the Greek vocabulary," Schliemann writes, "I procured a modern Greek translation of _Paul et Virginie_, and read it through, comparing every word with its equivalent in the French original. When I had finished this task I knew at least one half the Greek words the book contained; and after repeating the operation I knew them all, or nearly so, without having lost a single minute by being obliged to use a dictionary....Of the Greek grammar I learned only the declensions and the verbs, and never lost my precious time in studying its rules; for as I saw that boys, after being troubled and tormented for eight years and more in school with the tedious rules of grammar, can nevertheless none of them write a letter in ancient Greek without making hundreds of atrocious blunders, I thought the method pursued by the schoolmasters must be altogether wrong....I learned ancient Greek as I would have learned a living language."
Everything about this felt right to me: the learning-by-doing approach, the refusal to waste time (academic methods seem to assume you want to make the study of the language a science, and expect you to know it formally better than your native tongue; in short, they do _not_ respect your time), the observation on ineffectual tedium, and the focus on learning it as a living language. It doesn't hurt that this is what Troy's discoverer advocated, and that he also happened to know Russian, French, English, Dutch, Spanish, Portuguese, Italian, Swedish, Polish, Latin, Arabic, and his native German by the end of his life.
I now had enough intuition about what I felt was lacking to build the tool I wanted.
## Building the Website
I asked myself whether I could build a website with a richer interface for reading and engaging with the original texts than what currently exists. So I started building, with the following goals in mind:
1. The reading experience should start by at least achieving parity with current resources (Scaife, Loeb). In practice, this meant I needed English translations (multiple preferred, where available).
2. Combine freely available scholarly databases, which exist to serve different purposes but have never been brought together, into a single intuitive interface where they can be maximally beneficial.
3. The site should tightly integrate spaced-repetition features, a proven and effective method for memorization, into texts of the reader's choice.
4. Use AI to do useful upfront work that would take humans a long time (e.g. interlinear text generation). As in Bitcoin, my primary field, proof-of-work is far easier to _validate_ than to _generate_. As long as we're up front about the limitations, there's no reason we can't use AI to generate initial interlinear texts that are then refined. I will go into more detail about my thoughts on using LLMs to generate interlinear translations, and how I did it here, in a subsequent post, but suffice it to say, they can achieve better results than you might expect.
I was aiming for something like a cross between the [Scaife Reader](https://scaife.perseus.org/) (which is difficult to use and more overtly academic) and [Legentibus](https://legentibus.com/) (which has proven very successful, but is more paywalled, more narrowly focused, and covers Latin instead of Greek).
Using [Claude](https://claude.ai/) to build the site, I was able to achieve all of these goals for at least a couple of texts each, validating the concept. The one exception is that I still need some mechanism and/or scholarly review to validate and correct mistakes in the interlinear texts. Thankfully, for the time being these can easily be compared in-site with the scholarly sourced definitions from LSJ, Perseus, and Logeion at any time.
## Tour of the Site
The homepage gives you an overview of the primary features.
![Lyceum homepage](/images/articles/introducing-lyceum/homepage.png)
Clicking "Browse All Texts" will take you to the `/browse` page, where you can search for texts by author or title, similar to Scaife Viewer:
![Browse page](/images/articles/introducing-lyceum/browse.png)
The corpus spans 373 authors and 1,837 works across all sources. Many of these will only contain the original Greek text with no translation for the time being, although I would love to fix that over time. The five texts at the top (Aesop's Fables, Gospel of John, Odyssey, Iliad, and Meditations) have been given special attention. These texts have interlinear translations and transliterations, and were what I primarily used for building and testing the site. If we click on one, we can see the reader view:
![Reader view](/images/articles/introducing-lyceum/reader.png)
There's a lot to unpack here. The "Display Options" card has checkboxes that add morphology information (sourced from Perseus) to each word to help with understanding the grammar: for example, "Color by Part of Speech", "Color Inflection", and "Show Inflection". For each word, you can click a popup to see that word's definition from various sources (Logeion, Perseus, and LSJ), along with its grammar. You can also see multiple translations side by side with the original Greek where they are available. Any of these options can be combined to your reading taste. As far as I'm aware, there is currently no Ancient Greek resource that combines dictionary definitions, grammar/morphology, and side-by-side translations for free.
![Morphology and display options](/images/articles/introducing-lyceum/morphology-options.png)
![Side-by-side translation](/images/articles/introducing-lyceum/side-by-side.png)
![Word popup](/images/articles/introducing-lyceum/word-popup.png)
For paid users, the page gets more interesting and useful. Two new checkboxes unlock in "Display Options": "Show Interlinear" and "Show Transliteration", both generated by Claude. Transliterations render the Greek characters in Latin script, and thus help beginners memorize the pronunciation of the Greek characters:
![Transliteration view](/images/articles/introducing-lyceum/transliteration.png)
Interlinear texts, as discussed earlier, give you direct word-by-word translations _in context_, so that you are no longer left guessing which of the many Perseus/LSJ definitions applies in the context of the book you are reading.
![Interlinear view](/images/articles/introducing-lyceum/interlinear.png)
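As a rough illustration (not Lyceum's actual rendering code, which is a website rather than a terminal tool), the core idea of an interlinear layout can be sketched as aligning each token of the original with its contextual gloss in a second row:

```python
# Hypothetical sketch: render an interlinear line by pairing each Greek
# token with its gloss, padding both to a shared column width so the
# gloss sits directly under the word it translates.
def interlinear(tokens, glosses):
    widths = [max(len(t), len(g)) for t, g in zip(tokens, glosses)]
    greek = "  ".join(t.ljust(w) for t, w in zip(tokens, widths))
    english = "  ".join(g.ljust(w) for g, w in zip(glosses, widths))
    return greek + "\n" + english

# Opening words of the Iliad with illustrative glosses.
print(interlinear(
    ["μῆνιν", "ἄειδε", "θεά"],
    ["wrath", "sing", "goddess"],
))
```

The glosses here are contextual, which is exactly what makes interlinears valuable: each word gets the one meaning that fits this passage, not a dictionary's full range of senses.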
Because these are LLM-generated and not scholarly-reviewed translations, you should compare them against their Logeion/Perseus/LSJ counterparts to be certain, but I have found they are usually accurate for these curated texts. You can compare them easily via the popup:
![Popup comparison with interlinear](/images/articles/introducing-lyceum/popup-comparison.png)
I am working on ways to better ensure/enhance the accuracy of these AI-generated interlinears, but in the meantime, this built-in cross-validation with the official scholarly sources should suffice.
These interlinear translations, combined with official translations and scholarly-source cross checking, make for a very powerful interface for reading.
There is one other feature that I'm particularly fond of: Anki-style spaced repetition. Paid users can select a section from any text and add its words to a deck, where they can later practice memorizing them.
![SRS deck selection](/images/articles/introducing-lyceum/srs-deck.png)
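For context, the scheduling behind Anki-style review is usually some variant of the SM-2 algorithm; here is a minimal sketch of that rule (Lyceum's actual scheduler may differ):

```python
# Hypothetical SM-2-style scheduling sketch: each review grade (0-5)
# updates an "ease factor" and the interval in days until the next review.
def sm2(ease, interval, reps, grade):
    if grade < 3:               # failed recall: start the card over
        return ease, 1, 0
    if reps == 0:
        interval = 1            # first successful review: see it tomorrow
    elif reps == 1:
        interval = 6            # second: see it in roughly a week
    else:
        interval = round(interval * ease)  # then grow geometrically
    # Ease drifts down for hesitant answers, but never below 1.3.
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return ease, interval, reps + 1

# Three successful reviews of a new card at grade 4.
state = (2.5, 0, 0)
for _ in range(3):
    state = sm2(*state, grade=4)
```

The intuition is that each success pushes the next review further out, so time is spent on the words you actually struggle with.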
Then, clicking on the "Study" tab will take you to:
![Study tab](/images/articles/introducing-lyceum/study-tab.png)
Now, you will be presented with cards like this:
Front:
![Card front](/images/articles/introducing-lyceum/card-front.png)
Back:
![Card back](/images/articles/introducing-lyceum/card-back.png)
You can even click "View in context" to have it highlight occurrences of that word in the text you added it from!
![View word in context](/images/articles/introducing-lyceum/view-in-context.png)
Finally, we have a dashboard with a few charts to track your learning progress:
![Dashboard](/images/articles/introducing-lyceum/dashboard.png)
# Conclusion
Lyceum is a rich modern reader for Ancient Greek texts with a novel combination of features that lets you access the Greek authors of Antiquity in their native tongue. It is meant as a helpful, intuitive, and engaging complement to other means of learning Greek. It is under active development, and new features and fixes will be added regularly.
Ultimately I'm building this because I want it to exist. I am the first customer. I now use the spaced-repetition and interlinear features every day and am learning the words from the _Iliad_, _Odyssey_, and Aesop's _Fables_ with it. Every day I can read a little more, and I am genuinely excited by my progress. I will continue to add features I (and others) think are useful; I'm sure there are plenty of mistakes to correct and improvements to be made that I'm missing.
If you want to have a voice in Lyceum's development or provide feedback, please join the [Discord](https://discord.gg/mnvAS6WUzz) or send an email to <support@lyceum.quest> with any suggestions you have or mistakes you find.
Future goals include:
- Fix any missing or incorrect dictionary definitions, with a user-driven feedback mechanism.
- Add many more interlinear translations and transliterations, especially for popular texts.
- A feedback mechanism for evaluating/correcting LLM-generated interlinear translations/transliterations.
- Full English/other language translations for a wider variety of texts (many currently only have the Greek).
- Voice/audio-enabled reading (like Legentibus) for pronunciation.
- Make texts downloadable in nice, readable formats. It might even be fun to allow PDF generation with whatever display options you've selected, for offline reading.
- Someday, the above could evolve into creating books/printouts with rich morphological information per word for certain texts (for example, a book could include interlinear translations and per-word grammar information). No publisher seems to do this at present.
### Gratitude for Previous Work
Thanks to Ross Scaife for laying the groundwork that led to the [Scaife Viewer](https://scaife.perseus.org/), which was a huge inspiration; to the [Vesuvius Challenge](https://scrollprize.org/) for stirring my imagination with an excitement about rediscovering the past; to the Loeb Library for their excellent books; to Anki for being the only widely adopted spaced-repetition software; to _Athenaze_ for being the book I started learning Greek with; and to Legentibus for setting the gold standard for modern Latin learning and being a great resource to take inspiration from.
### Data Sources:
#### Texts
- [Perseus Digital Library](https://github.com/PerseusDL/canonical-greekLit): Greek texts and English translations from the canonical-greekLit repository. 50+ authors, 631 works, from Homer through the Church Fathers.
- [Diorisis Ancient Greek Corpus](https://figshare.com/articles/dataset/The_Diorisis_Ancient_Greek_Corpus/6187256): 820 pre-lemmatized XML texts with word-level morphology, POS tagging, and sentence boundaries. Used to aid in generating contextual meanings.
- [First1K Greek Project](https://github.com/OpenGreekAndLatin/First1KGreek): Additional Greek texts from the Open Greek and Latin project at the University of Leipzig.
- [Chambry Aesopica](http://www.mythfolklore.net/aesopica/): Chambry's critical edition of Aesop's Fables with Perry numbering, multiple recensions, and scholarly apparatus.
- [Townsend Aesop Translation](https://www.gutenberg.org/ebooks/21): George Fyler Townsend's 1867 English translation of Aesop's Fables, used for parallel reading.
#### Morphology & Glosses
- [Perseus Ancient Greek Dependency Treebank](https://github.com/PerseusDL/treebank_data): Syntactic annotations (dependency trees) and glosses for select texts including Aesop, Homer, and Attic prose. Used for word-level alignment with translations.
- [Diogenes](https://github.com/pjheslin/diogenes): Morphological analyses for 400K+ Greek word forms. Each entry maps an inflected form to its lemma, part of speech, and full grammatical analysis (case, number, gender, tense, voice, mood, person).
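The shape of such a morphology table can be sketched as a simple lookup from inflected form to analysis (the field names here are illustrative, not Diogenes' actual schema):

```python
# Hypothetical morphology lookup: one inflected Greek form maps to its
# lemma, part of speech, and a grammatical parse, as a Diogenes-style
# analysis might be represented after import.
morph = {
    "μῆνιν": {
        "lemma": "μῆνις",
        "pos": "noun",
        "parse": {"case": "accusative", "number": "singular", "gender": "feminine"},
    },
}

entry = morph["μῆνιν"]
print(entry["lemma"], entry["parse"]["case"])
```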
#### Definitions
- [LSJ](https://lsj.gr/): 116,000+ dictionary entries from the definitive Ancient Greek lexicon (9th edition, 1940). Full scholarly definitions with citations, etymology, and usage notes. The "gold standard" for comprehensive Greek definitions.
#### Claude AI
- **Content**: AI-generated contextual glosses that analyze word meaning within specific passages. Used to disambiguate words with multiple meanings (e.g., "λόγος" as "word" vs "argument" vs "reason" depending on context).


@@ -9,6 +9,8 @@ tags: software
## **Articles**
- ### [Introducing Lyceum: A Modern Interface for Reading Ancient Greek Texts](/posts/articles/2026-02-23-introducing-lyceum)
- ### [Fear to Attempt](/posts/articles/2025-06-20-fear-to-attempt)
- ### [Payjoin for a Better Bitcoin Future](/posts/articles/2023-10-31-payjoin-better-future)


@@ -15,3 +15,9 @@ name = "path"
version = "0.1.0"
source = "path:../../packages/path"
[[package]]
name = "ssg"
version = "0.1.0"
source = "path:../../packages/packages/ssg"
dependencies = ["markdown", "frontmatter", "path"]


@@ -4,6 +4,7 @@ version = "0.1.0"
description = "A Lux project"
[dependencies]
markdown = { version = "0.1.0", path = "../../packages/markdown" }
frontmatter = { version = "0.1.0", path = "../../packages/frontmatter" }
path = { version = "0.1.0", path = "../../packages/path" }
ssg = { version = "0.1.0", path = "../../packages/packages/ssg" }
markdown = { version = "0.1.0", path = "../../packages/packages/markdown" }
frontmatter = { version = "0.1.0", path = "../../packages/packages/frontmatter" }
path = { version = "0.1.0", path = "../../packages/packages/path" }

main.lux

@@ -1,16 +1,10 @@
import ssg
import markdown
import frontmatter
import path
type SiteConfig =
| SiteConfig(String, String, String, String, String, String, String)
type Page =
| Page(String, String, String, String, String)
type TagEntry =
| TagEntry(String, String, String, String, String)
fn loadConfig(path: String): SiteConfig with {File} = {
let raw = File.read(path)
let json = match Json.parse(raw) {
@@ -96,88 +90,6 @@ fn cfgStaticDir(c: SiteConfig): String =
SiteConfig(_, _, _, _, _, _, sd) => sd,
}
fn pgDate(p: Page): String =
match p {
Page(d, _, _, _, _) => d,
}
fn pgTitle(p: Page): String =
match p {
Page(_, t, _, _, _) => t,
}
fn pgSlug(p: Page): String =
match p {
Page(_, _, s, _, _) => s,
}
fn pgTags(p: Page): String =
match p {
Page(_, _, _, t, _) => t,
}
fn pgContent(p: Page): String =
match p {
Page(_, _, _, _, c) => c,
}
fn teTag(e: TagEntry): String =
match e {
TagEntry(t, _, _, _, _) => t,
}
fn teTitle(e: TagEntry): String =
match e {
TagEntry(_, t, _, _, _) => t,
}
fn teDate(e: TagEntry): String =
match e {
TagEntry(_, _, d, _, _) => d,
}
fn teSlug(e: TagEntry): String =
match e {
TagEntry(_, _, _, s, _) => s,
}
fn teSection(e: TagEntry): String =
match e {
TagEntry(_, _, _, _, s) => s,
}
fn slugFromFilename(filename: String): String = path.stripExtension(filename)
fn formatDate(isoDate: String): String = {
if String.length(isoDate) < 10 then isoDate else {
let year = String.substring(isoDate, 0, 4)
let month = String.substring(isoDate, 5, 7)
let day = String.substring(isoDate, 8, 10)
let monthName = if month == "01" then "Jan" else if month == "02" then "Feb" else if month == "03" then "Mar" else if month == "04" then "Apr" else if month == "05" then "May" else if month == "06" then "Jun" else if month == "07" then "Jul" else if month == "08" then "Aug" else if month == "09" then "Sep" else if month == "10" then "Oct" else if month == "11" then "Nov" else "Dec"
year + " " + monthName + " " + day
}
}
fn basename(p: String): String = path.basename(p)
fn dirname(p: String): String = path.dirname(p)
fn sortInsert(sorted: List<Page>, item: Page): List<Page> = insertByDate(sorted, item)
fn sortByDateDesc(items: List<Page>): List<Page> = List.fold(items, [], sortInsert)
fn insertByDate(sorted: List<Page>, item: Page): List<Page> = {
match List.head(sorted) {
None => [item],
Some(first) => if pgDate(item) >= pgDate(first) then List.concat([item], sorted) else match List.tail(sorted) {
Some(rest) => List.concat([first], insertByDate(rest, item)),
None => [first, item],
},
}
}
fn convertMd(text: String): String = markdown.toHtml(text)
fn htmlHead(title: String, description: String): String = "<!doctype html><html lang=\"en\"><head>" + "<title>" + title + "</title>" + "<meta charset=\"utf-8\">" + "<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">" + "<meta name=\"description\" content=\"" + description + "\">" + "<meta property=\"og:title\" content=\"" + title + "\">" + "<meta property=\"og:description\" content=\"" + description + "\">" + "<meta property=\"og:type\" content=\"website\">" + "<meta property=\"og:url\" content=\"https://blu.cx\">" + "<meta property=\"og:image\" content=\"https://blu.cx/images/social-card.png\">" + "<meta property=\"og:site_name\" content=\"Brandon Lucas\">" + "<meta name=\"twitter:card\" content=\"summary_large_image\">" + "<meta name=\"twitter:title\" content=\"" + title + "\">" + "<meta name=\"twitter:description\" content=\"" + description + "\">" + "<link rel=\"canonical\" href=\"https://blu.cx\">" + "<link rel=\"preload\" href=\"/fonts/EBGaramond-Regular.woff2\" as=\"font\" type=\"font/woff2\" crossorigin=\"\">" + "<link rel=\"preload\" href=\"/fonts/UnifrakturMaguntia-Regular.woff2\" as=\"font\" type=\"font/woff2\" crossorigin=\"\">" + "<link href=\"/styles.css\" rel=\"stylesheet\" type=\"text/css\">" + "<link href=\"/highlight/tokyo-night-dark.min.css\" rel=\"stylesheet\" type=\"text/css\">" + "<script src=\"/highlight/highlight.min.js\" defer=\"\"></script>" + "<script>document.addEventListener('DOMContentLoaded', function() \{ hljs.highlightAll(); \});</script>" + "</head>"
fn htmlNav(): String = "<a href=\"/\"><img src=\"/images/favicon.webp\" alt=\"Narsil Logo\" width=\"59\" height=\"80\"></a>"
@@ -196,74 +108,34 @@ fn htmlPostEntry(title: String, date: String, url: String): String = "<div class
fn htmlPostList(sectionTitle: String, postsHtml: String): String = "<div class=\"page w-[80%] flex flex-col gap-8\">" + "<h1 class=\"text-4xl text-center font-bold w-full\">" + sectionTitle + "</h1>" + "<div class=\"flex flex-col gap-4\">" + postsHtml + "</div></div>"
fn htmlHomePage(siteTitle: String, snippetsHtml: String): String = "<h1 class=\"unifrakturmaguntia-regular text-6xl text-center w-full\">" + siteTitle + "</h1>" + "<div class=\"font-bold text-4xl italic text-center w-full\">" + "Βράνδων Λουκᾶς" + "</div>" + "<h2 class=\"text-xl text-center flex flex-col\">" + "<span>Bitcoin Lightning Payments @ voltage.cloud</span>" + "<span>Bitcoin Privacy &amp; Scalability @ payjoin.org.</span>" + "<span>Love sovereign software &amp; history.</span>" + "<span>Learning Nix, Elm, Rust, Ancient Greek and Latin.</span>" + "</h2>" + "<div class=\"flex flex-col gap-4 w-full\">" + snippetsHtml + "</div>"
fn htmlWelcome(): String = "<div class=\"flex flex-col gap-4 border border-gray-500 p-8 rounded-sm max-h-150 overflow-y-auto text-wrap break-words\"><span class=\"flex flex-col gap-4\"><span class=\"text-center\">Welcome!</span><span class=\"text-center\">I'm a software builder by trade who's interested in too many things for my own good.</span><span class=\"text-center\">Here's a sample:</span><ul class=\"list-outside ml-8\"><li class=\"list-disc\">Free and Open Source Software (FOSS): Bitcoin, Lightning Network, Payjoin, Linux, GrapheneOS, VPNs, etc.</li><li class=\"list-disc\">History: Ancient Greek, Roman, American Revolution, and more.)</li><li class=\"list-disc\">Biographies: Adams, Hamilton, Washington, Franklin, Oppenheimer, Ramanujan and more</li><li class=\"list-disc\">Philosophy, psychology, Christianity: Influenced by Cicero, Nietzsche, Karl Popper, Dostoevsky, Will Durant, Oliver Sacks, Jung, Seneca, and more. Attempting to read Kierkegaard, but finding it impenetrably difficult yet joyful.)</li><li class=\"list-disc\">Languages: I'm currently learning Ancient Greek and Latin.</li><li class=\"list-disc\">Fun: Bass guitar</li></ul></span></div>"
fn htmlHomePage(siteTitle: String, snippetsHtml: String): String = "<h1 class=\"unifrakturmaguntia-regular text-6xl text-center w-full\">" + siteTitle + "</h1>" + "<div class=\"font-bold text-4xl italic text-center w-full\">" + "Βράνδων Λουκᾶς" + "</div>" + "<h2 class=\"text-xl text-center flex flex-col\">" + "<span>Bitcoin Lightning Payments @ voltage.cloud</span>" + "<span>Bitcoin Privacy &amp; Scalability @ payjoin.org.</span>" + "<span>Love sovereign software &amp; history.</span>" + "<span>Learning Nix, Elm, Rust, Ancient Greek and Latin.</span>" + "</h2>" + "<div class=\"flex flex-col gap-4 w-full\">" + htmlWelcome() + snippetsHtml + "</div>"
fn htmlSnippetCard(content: String): String = "<div class=\"flex flex-col gap-4 border border-gray-500 p-8 rounded-sm max-h-150 overflow-y-auto text-wrap break-words\">" + "<div><div class=\"markdown\">" + content + "</div></div>" + "</div>"
fn htmlTagPage(tagName: String, postsHtml: String): String = "<div class=\"page w-[80%] flex flex-col gap-8\">" + "<h1 class=\"text-4xl text-center font-bold w-full\">Tag: " + tagName + "</h1>" + "<div class=\"flex flex-col gap-4\">" + postsHtml + "</div></div>"
fn parseFile(path: String): Page with {File} = {
let raw = File.read(path)
let doc = frontmatter.parse(raw)
let title = frontmatter.title(doc)
let date = frontmatter.date(doc)
let tags = frontmatter.getOrDefault(doc, "tags", "")
let body = frontmatter.body(doc)
let htmlContent = convertMd(body)
let filename = basename(path)
let slug = slugFromFilename(filename)
Page(date, title, slug, tags, htmlContent)
}
fn mapParseFiles(dir: String, files: List<String>): List<Page> with {File} =
match List.head(files) {
None => [],
Some(filename) => {
let page = parseFile(dir + "/" + filename)
match List.tail(files) {
Some(rest) => List.concat([page], mapParseFiles(dir, rest)),
None => [page],
}
},
}
fn readSection(contentDir: String, section: String): List<Page> with {File} = {
let dir = contentDir + "/" + section
if File.exists(dir) then {
let entries = File.readDir(dir)
let mdFiles = List.filter(entries, fn(e: String): Bool => String.endsWith(e, ".md"))
mapParseFiles(dir, mdFiles)
} else []
}
fn ensureDir(path: String): Unit with {File} = {
if File.exists(path) then () else {
let parent = dirname(path)
if parent != "." then if parent != path then ensureDir(parent) else () else ()
File.mkdir(path)
}
}
fn writePostPage(outputDir: String, section: String, page: Page, siteTitle: String, siteDesc: String): Unit with {File} = {
let slug = pgSlug(page)
let title = pgTitle(page)
let date = pgDate(page)
let tagsRaw = pgTags(page)
let content = pgContent(page)
fn writePostPage(outputDir: String, section: String, page: Post, siteTitle: String, siteDesc: String): Unit with {File} = {
let slug = ssg.postSlug(page)
let title = ssg.postTitle(page)
let date = ssg.postDate(page)
let tagsRaw = ssg.postTags(page)
let content = ssg.postContent(page)
let tags = if tagsRaw == "" then [] else String.split(tagsRaw, " ")
let tagsHtml = renderTagLinks(tags)
let formattedDate = formatDate(date)
let formattedDate = ssg.formatDate(date)
let body = htmlPostPage(title, formattedDate, tagsHtml, content)
let pageTitle = title + " | " + siteTitle
let html = htmlDocument(pageTitle, siteDesc, body)
let dir = outputDir + "/posts/" + section + "/" + slug
ensureDir(dir)
ssg.ensureDir(dir)
File.write(dir + "/index.html", html)
}
fn writeSectionIndex(outputDir: String, section: String, pages: List<Page>, siteTitle: String, siteDesc: String): Unit with {File} = {
let sorted = sortByDateDesc(pages)
let postEntries = List.map(sorted, fn(page: Page): String => htmlPostEntry(pgTitle(page), formatDate(pgDate(page)), "/posts/" + section + "/" + pgSlug(page)))
fn writeSectionIndex(outputDir: String, section: String, pages: List<Post>, siteTitle: String, siteDesc: String): Unit with {File} = {
let sorted = ssg.sortByDateDesc(pages)
let postEntries = List.map(sorted, fn(page: Post): String => htmlPostEntry(ssg.postTitle(page), ssg.formatDate(ssg.postDate(page)), "/posts/" + section + "/" + ssg.postSlug(page)))
let postsHtml = String.join(postEntries, "
")
let sectionName = if section == "articles" then "Articles" else if section == "blog" then "Blog" else if section == "journal" then "Journal" else section
@@ -271,36 +143,21 @@ fn writeSectionIndex(outputDir: String, section: String, pages: List<Page>, site
let pageTitle = sectionName + " | " + siteTitle
let html = htmlDocument(pageTitle, siteDesc, body)
let dir = outputDir + "/posts/" + section
ensureDir(dir)
ssg.ensureDir(dir)
File.write(dir + "/index.html", html)
}
fn collectTagsForPage(section: String, page: Page): List<TagEntry> = {
let tagsRaw = pgTags(page)
let tags = if tagsRaw == "" then [] else String.split(tagsRaw, " ")
List.map(tags, fn(tag: String): TagEntry => TagEntry(tag, pgTitle(page), pgDate(page), pgSlug(page), section))
}
fn collectTags(section: String, pages: List<Page>): List<TagEntry> = {
let nested = List.map(pages, fn(page: Page): List<TagEntry> => collectTagsForPage(section, page))
List.fold(nested, [], fn(acc: List<TagEntry>, entries: List<TagEntry>): List<TagEntry> => List.concat(acc, entries))
}
fn addIfUnique(acc: List<String>, e: TagEntry): List<String> = if List.any(acc, fn(t: String): Bool => t == teTag(e)) then acc else List.concat(acc, [teTag(e)])
fn getUniqueTags(entries: List<TagEntry>): List<String> = List.fold(entries, [], addIfUnique)
fn tagEntryToHtml(e: TagEntry): String = htmlPostEntry(teTitle(e), formatDate(teDate(e)), "/posts/" + teSection(e) + "/" + teSlug(e))
fn tagEntryToHtml(e: TagEntry): String = htmlPostEntry(ssg.tagTitle(e), ssg.formatDate(ssg.tagDate(e)), "/posts/" + ssg.tagSection(e) + "/" + ssg.tagSlug(e))
fn writeOneTagPage(outputDir: String, tag: String, allTagEntries: List<TagEntry>, siteTitle: String, siteDesc: String): Unit with {File} = {
let entries = List.filter(allTagEntries, fn(e: TagEntry): Bool => teTag(e) == tag)
let entries = ssg.entriesForTag(allTagEntries, tag)
let postsHtml = String.join(List.map(entries, tagEntryToHtml), "
")
let body = htmlTagPage(tag, postsHtml)
let pageTitle = "Tag: " + tag + " | " + siteTitle
let html = htmlDocument(pageTitle, siteDesc, body)
let dir = outputDir + "/tags/" + tag
ensureDir(dir)
ssg.ensureDir(dir)
File.write(dir + "/index.html", html)
}
@@ -316,15 +173,15 @@ fn writeTagPagesLoop(outputDir: String, tags: List<String>, allTagEntries: List<
},
}
fn writeTagPages(outputDir: String, allTagEntries: List<TagEntry>, siteTitle: String, siteDesc: String): Unit with {File, Console} = {
let uniqueTags = getUniqueTags(allTagEntries)
fn writeTagPages(outputDir: String, allTagEntries: List<TagEntry>, siteTitle: String, siteDesc: String): Unit with {File} = {
let uniqueTags = ssg.uniqueTags(allTagEntries)
writeTagPagesLoop(outputDir, uniqueTags, allTagEntries, siteTitle, siteDesc)
}
fn renderSnippetFile(snippetDir: String, filename: String): String with {File} = {
let raw = File.read(snippetDir + "/" + filename)
let doc = frontmatter.parse(raw)
htmlSnippetCard(convertMd(frontmatter.body(doc)))
htmlSnippetCard(markdown.toHtml(frontmatter.body(doc)))
}
fn renderSnippets(snippetDir: String, files: List<String>): List<String> with {File} =
@@ -339,32 +196,35 @@ fn renderSnippets(snippetDir: String, files: List<String>): List<String> with {F
},
}
+fn insertString(sorted: List<String>, item: String): List<String> = {
+  match List.head(sorted) {
+    None => [item],
+    Some(first) => if item <= first then List.concat([item], sorted) else match List.tail(sorted) {
+      Some(rest) => List.concat([first], insertString(rest, item)),
+      None => [first, item],
+    }
+  }
+}
+fn sortStrings(items: List<String>): List<String> = List.fold(items, [], insertString)
 fn writeHomePage(outputDir: String, contentDir: String, siteTitle: String, siteDesc: String): Unit with {File} = {
   let snippetDir = contentDir + "/snippets"
   let snippetEntries = if File.exists(snippetDir) then {
     let entries = File.readDir(snippetDir)
-    List.filter(entries, fn(e: String): Bool => String.endsWith(e, ".md"))
+    let mdFiles = List.filter(entries, fn(e: String): Bool => String.endsWith(e, ".md"))
+    sortStrings(mdFiles)
   } else []
   let snippetCards = renderSnippets(snippetDir, snippetEntries)
-  let firstCard = match List.head(snippetCards) {
-    Some(c) => c,
-    None => "",
-  }
-  let restCards = match List.tail(snippetCards) {
-    Some(rest) => rest,
-    None => [],
-  }
-  let gridHtml = "<div class=\"grid grid-cols-1 md:grid-cols-2 gap-4\">" + String.join(restCards, "
+  let gridHtml = "<div class=\"grid grid-cols-1 md:grid-cols-2 gap-4\">" + String.join(snippetCards, "
 ") + "</div>"
-  let snippetsHtml = firstCard + "
-" + gridHtml
-  let body = htmlHomePage(siteTitle, snippetsHtml)
+  let body = htmlHomePage(siteTitle, gridHtml)
   let pageTitle = "Bitcoin Lightning Developer & Privacy Advocate | " + siteTitle
   let html = htmlDocument(pageTitle, siteDesc, body)
   File.write(outputDir + "/index.html", html)
 }
-fn writeAllPostPages(outputDir: String, section: String, pages: List<Page>, siteTitle: String, siteDesc: String): Unit with {File} =
+fn writeAllPostPages(outputDir: String, section: String, pages: List<Post>, siteTitle: String, siteDesc: String): Unit with {File} =
   match List.head(pages) {
     None => (),
     Some(page) => {
@@ -389,11 +249,11 @@ fn main(): Unit with {File, Console, Process} = {
   Console.print("Content: " + contentDir)
   Console.print("Output: " + outputDir)
   Console.print("")
-  ensureDir(outputDir)
+  ssg.ensureDir(outputDir)
   Console.print("Reading content...")
-  let articles = readSection(contentDir, "articles")
-  let blogPosts = readSection(contentDir, "blog")
-  let journalPosts = readSection(contentDir, "journal")
+  let articles = ssg.readSection(contentDir, "articles")
+  let blogPosts = ssg.readSection(contentDir, "blog")
+  let journalPosts = ssg.readSection(contentDir, "journal")
   Console.print("  Articles: " + toString(List.length(articles)))
   Console.print("  Blog posts: " + toString(List.length(blogPosts)))
   Console.print("  Journal entries: " + toString(List.length(journalPosts)))
@@ -407,9 +267,9 @@ fn main(): Unit with {File, Console, Process} = {
   writeSectionIndex(outputDir, "blog", blogPosts, siteTitle, siteDesc)
   writeSectionIndex(outputDir, "journal", journalPosts, siteTitle, siteDesc)
   Console.print("Writing tag pages...")
-  let articleTags = collectTags("articles", articles)
-  let blogTags = collectTags("blog", blogPosts)
-  let journalTags = collectTags("journal", journalPosts)
+  let articleTags = ssg.collectTags("articles", articles)
+  let blogTags = ssg.collectTags("blog", blogPosts)
+  let journalTags = ssg.collectTags("journal", journalPosts)
   let allTags = List.concat(List.concat(articleTags, blogTags), journalTags)
   writeTagPages(outputDir, allTags, siteTitle, siteDesc)
   Console.print("Writing home page...")
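The new `insertString`/`sortStrings` pair above sorts the snippet filenames with an insertion sort driven by `List.fold`: each item is folded into an accumulator that is kept sorted by recursive head/tail insertion. A minimal Python sketch of the same pattern (hypothetical names, not part of the diff):

```python
from functools import reduce

def insert_string(sorted_list, item):
    """Insert item into an already-sorted list, preserving order.

    Mirrors insertString from the diff: compare against the head,
    prepend if the item sorts first, otherwise recurse into the tail.
    """
    if not sorted_list:
        return [item]
    first, rest = sorted_list[0], sorted_list[1:]
    if item <= first:
        return [item] + sorted_list
    return [first] + insert_string(rest, item)

def sort_strings(items):
    # Fold every item into an empty accumulator, inserting each in
    # place: an O(n^2) insertion sort, fine for a handful of files.
    return reduce(insert_string, items, [])

print(sort_strings(["02-b.md", "01-a.md", "03-c.md"]))
# → ['01-a.md', '02-b.md', '03-c.md']
```

Passing the named helper to the fold (rather than a lambda) matters here: per Issue 7 in the bug list, the type checker's effect inference for higher-order callbacks was only recently fixed, and a pure top-level function sidesteps that entirely.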
28 binary image files changed (image previews not shown).

1 file diff suppressed because one or more lines are too long.