# Speaker Notes · 講者口白

> 這份檔案是投影片左側「藝術家 OS」彈窗的內容源。
> 每個 slide id 一個 `## slide-XX` 區塊，內含 `### zh` 與 `### en` 雙語段落。
> 編輯這份 .md 即可更新彈窗。瀏覽器在頁面載入時 fetch 這份檔案、parse 後填入彈窗資料。
>
> **寫作規則**（同 CLAUDE.md「講者筆記寫作規則」）：
>
> - 禁用破折號 `——`，改用正常標點
> - 不要詩化斷行，每段是邏輯完整的一塊
> - 每頁最多 4 段

---

## slide-0

### zh

我今天在出門之前偷偷換了主題。我請 Claude Code 看完整份投影片，他覺得我原本的標題太像學術論文，沒有煽動性也沒有情感。他從我自己的文章裡抓出一句話，認為我應該用這句開場：「結構，是不在場的藝術家。」

這句話是我自己寫的。但對 Claude Code 而言，這句比我選的方法論標題更值得當開場。我接受了這個建議。

接下來九十分鐘，我們要驗證這句話。看它是真的成立，還是只是藝術家的自我好感、自我想像投射出來的一種說法。

如果這句話成立，它呼應的是我看到全球新媒體跟生成藝術一個很巨大的轉變。很多人跟我可能都在類似的觀點上面，平行進展。

### en

Right before walking out today, I quietly changed the title. I asked Claude Code to look over the whole deck. He thought my original title sounded too academic, lacking provocation and emotion. He picked a line out of my own writing and suggested I use it as the opening: "Structure is the artist who isn't in the room."

I wrote that line myself. But for Claude Code, this line was a stronger opening than the methodology title I chose. I took the suggestion.

For the next ninety minutes, we're going to test it: does the claim actually hold, or is it just an artist's self-flattering projection?

If it holds, it echoes a massive shift I see in new media and generative art globally. Many of us are probably arriving at similar conclusions in parallel.

---

*這個轉變不是我先看見的。我只是先學會把它寫成可被別人接續的格式。要解釋怎麼學會的，得回到我這個人本身。*

*This shift wasn't something I saw first. I just learned earlier than most how to write it down in a format others could continue. To explain how I learned that, I have to go back to the person doing the writing.*

## slide-1

### zh

Affine Cipher 是一種古典替換密碼，跟凱撒密碼是親戚。把每個英文字母經過一次線性函數轉換：E(x) = (ax + b) mod 26。我用 a=5、b=8 把 aluan wang 算出來，得到 ileiv oivm，合成一個字：Ileivoivm。這是我從 2021 年開始在 NFT 平台上用的代號。

2021 年是加密藝術爆發的一年。那時候我加入加密藝術的第一天就發現，所有國際上的藝術家本來的英文名字都不見了，他們變成 0x 開頭、變成 hex、變成自創的 cipher。錢包地址本身就是身份。我研究了一下，發現多數藝術家認為：舊的名字是現實世界的編碼，要進入新的場域，名字也得重新組合。我當時沒多想，看到大家這樣做我就跟著做。Pak、0xDEAFBEEF 那一批人都在做。Kevin Abosch 甚至直接把加密過的文字本身做成作品，例如《Hexadecimal Testimony》、《1111》。

這麼多年過後，我覺得當時這個衝動很值得。為什麼？因為現在如果你打開任何一個 agent 搜尋我，你會發現我有兩個版本：一個是 2021 之前的王新仁，一個是 2021 之後的 Ileivoivm。有些 agent 認得前者，有些認得後者。同一個人，agent 對我的人設是有分身的。

這個分身不是事後設計出來的，是 2021 那個衝動意外造成的。我的名字無形中把我的時間軸切成兩段：加密前跟加密後。後來我做的所有事情，從 GeoPunk、Chaos、到把寫作風格寫成 .md，背後的姿勢都是同一個。把可以被結構化的東西結構化，包括自己的名字。Ileivoivm 是我這條工作主線最早的 commit。

### en

Affine Cipher is a classical substitution cipher, a cousin of the Caesar cipher. Each letter passes through a linear function: E(x) = (ax + b) mod 26. I used a=5, b=8 to encrypt "aluan wang" and got "ileiv oivm." I joined the letters into one word: Ileivoivm. This has been my handle on NFT platforms since 2021.
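The encryption step can be replayed in a few lines of Python. This is a sketch; only the formula E(x) = (ax + b) mod 26 with a=5, b=8 comes from the slide, and the function name is mine:

```python
# Affine cipher: E(x) = (a*x + b) mod 26, with a=5, b=8.
# (a must be coprime with 26 for the mapping to be reversible.)
def affine_encrypt(text, a=5, b=8):
    out = []
    for ch in text:
        if ch.isalpha():
            x = ord(ch.lower()) - ord("a")   # letter -> 0..25
            out.append(chr((a * x + b) % 26 + ord("a")))
        else:
            out.append(ch)                   # keep spaces as-is
    return "".join(out)

print(affine_encrypt("aluan wang"))  # -> "ileiv oivm"
```

Joining the two output words gives the handle Ileivoivm.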

2021 was the explosive year for crypto art. The first day I joined, I noticed that all the international artists' original names were gone. They became 0x prefixes, hex, custom ciphers. The wallet address itself was identity. The reasoning, I learned, was that your old name is encoded by the old world; if you're entering a new field, your name should be reassembled too. I didn't think much about it then. I saw everyone doing it and went along. Pak, 0xDEAFBEEF, that whole crowd was doing it. Kevin Abosch went further and turned encrypted text itself into artworks, like Hexadecimal Testimony and 1111.

Years later, I think that impulse was worth keeping. Here's why: if you search me through any agent now, you'll find I have two versions. Wang Hsin-Jen pre-2021, and Ileivoivm post-2021. Some agents recognize the former, some the latter. Same person, but the agent's mental model of me is forked.

This fork wasn't designed afterward. The 2021 impulse made it happen. My name silently cut my timeline into two halves: pre-encryption, post-encryption. Everything I made after that, GeoPunk, Chaos, writing my voice into .md, runs the same posture. Structure what can be structured, including your own name. Ileivoivm is the earliest commit of this working line.

---

*名字加密是一個動作，不是一句話。要把這個動作說成話，得借用六十年前一個我沒見過的老人留下的句子，跟我自己今年寫下的另一句。*

*Encrypting a name is an action, not a sentence. To turn that action into a sentence, I borrow one written sixty years ago by someone I never met, and pair it with one I wrote myself this year.*

## slide-1b

### zh

這頁有兩句引言，中間隔了 60 年。1967 年 Sol LeWitt 寫下：「The idea becomes a machine that makes the art.」觀念，變成了製造藝術的機器。那一年觀念藝術剛起來，他預言：藝術不需要藝術家親手做，把觀念寫清楚，剩下的交給機器或執行者。這句話 1967 年寫出來，現在聽起來像在預言 LLM。

2026 年我接著寫：「自我歸檔，是新的自畫像。」為什麼？畢卡索、梵谷、所有經典藝術家都愛畫自畫像。畫自畫像的時候，他們是進入一種禪定的狀態，重新審視鏡子裡的自己，吐出一個吸收過後的對照。透過畫自己來認識世界，以小見大。我這幾年發現一件具體的事：我在 2010 年寫的自我介紹，agent 現在不會引用，它引用的是我 2018、2019、2020、2025 不同版本的自我介紹。當它看到這麼多版本的我，它會互相索引。下次你打開 ChatGPT 問「臺灣的生成藝術家有誰」，吐出來的我，是 agent 讀過所有版本之後三角驗證出來的我。

這條線中間還有一個橋：Casey Reas，Processing 的共同創造者。他主張的核心觀點是：系統本身就是作品，每一次輸出只是這個系統的一個實例。我的所有程式碼，我也都認為它是作品的一部分，輸出只是這個系統最終的表現特徵。

LeWitt 給了觀念。Reas 給了系統。我接著問：那要被執行的是誰？是「我」。但這裡要誠實一句：我做的不是接續 LeWitt，是反向操作。LeWitt 削去作者，讓觀念低溫。我把作者放回來，但放回的不是手，是可被別人接續的結構。差 60 年，差一個 LLM 時代。所以這頁的三句宣言把這條線串起來。我不創造圖像，我建構的是能記住決策如何發生的系統。繪畫存在於時間之中，而不是形式之上。你看到的，是人類意圖殘留下來的痕跡。

### en

Two quotes on this page. Sixty years between them. In 1967, Sol LeWitt wrote: "The idea becomes a machine that makes the art." That year, conceptual art was just emerging. He predicted: art doesn't need the artist to make it by hand. Write the idea clearly, and the rest can be handed to a machine, or an executor. Written in 1967, this now sounds like a prediction of LLMs.

In 2026 I write: "Self-archiving is the new self-portrait." Why? Picasso, van Gogh, every classical artist loved doing self-portraits. When they painted themselves, they entered a kind of meditative state, looking at themselves in the mirror and producing a digested reflection. By painting themselves, they came to know the world. From the small, the large. I noticed something concrete in recent years. The bio I wrote in 2010, agents don't cite anymore. They cite my versions from 2018, 2019, 2020, 2025. When agents see all these versions, they cross-reference them. Next time you ask ChatGPT "who are Taiwan's generative artists?", what you get is me triangulated by an agent that read all my time-stamped versions.

There's a bridge in this line: Casey Reas, co-creator of Processing. His core argument is that the system itself is the artwork, and every output is just an instance of the system. I treat my own code the same way; the output is just the final expression of that system.

LeWitt gave us the idea. Reas gave us the system. I'm asking: what gets executed? It's "me." One honest line here: I'm not extending LeWitt; I'm doing the reverse. LeWitt removed the author to cool the idea down. I put the author back, but not the hand. What I put back is a structure others can continue. Sixty years apart, an LLM era apart. That's why this page's three statements connect the line. I don't create images. I build systems that remember how decisions were made. Painting exists in time, not in form. What you see is the residue of human intention.

---

*兩句宣言落了地，但宣言需要技術翻譯。LeWitt 的觀念跟 Reas 的系統，要在 LLM 時代真的可被接續，得靠工程界一套已經穩定運作四十年的協議。*

*Two declarations landed. But declarations need a technical translation. For LeWitt's idea and Reas's system to be actually continuable in the LLM era, I borrow a protocol that's been running stably in engineering for forty years.*

## slide-1c

### zh

這頁我用 Git 圖在講一個工程界的文化。你不寫程式沒關係，這套文化的精神跟做藝術、做研究、寫家族史是一樣的。**commit** 是我在這個時間點做了一個決定，把它釘下來，記的不只是改了什麼，還有為什麼這樣改。**branch** 是同一份東西分出去試另一種可能。**merge** 是兩條岔路重新交會。**fork** 是別人複製一份你的東西，從那個點開始長出他自己的版本。**issue** 是有人看見你做的，回應、提問、給建議，不是干擾，是有人在意。這套文化在 LLM 時代特別關鍵：大型語言模型能這麼快發展，很大程度來自 GitHub 上那些公開的程式碼跟附帶的註解。程式語言不會讓 agent 誤讀，註解就是錨點，agent 透過註解理解人類意圖。LLM 訓練語料持續從 GitHub 取材。

我把這個機制反過來用：把方法論、寫作風格、思考過程寫成 .md 放上 GitHub。我在 GitHub 上的鋪陳會被 LLM 反覆讀到，下次你問 AI 關於我，吐出來的某種程度是經過這份 .md 過濾過的我。這不是 SEO，是讓 AI 訓練語料裡有我的觀點。但不是所有資料都同樣重要。截至 2026 年 4 月，我的 InkField 專案在 GitHub 上有 66 個 star、5 個 fork，agent 會去看是誰 star、誰 fork。如果都是其他 programmer 跟藝術家，這份資料的可信度上升。同樣的邏輯在現實世界也成立：你叫 ChatGPT 搜尋一個陌生的人，它會先看政府資料、法人組織、媒體報導這些強訊號，再看周邊報導三角驗證。你必須在弱連結網路裡反覆出現，AI 才能拼出一個可信的你。

所以這份投影片、這份 .md 也放在 git 上。你看得到我每一次修改、每一次刪掉重寫，我的決策過程是公開的。你可以 fork 它，加上你自己的方法論，把它變成你的版本。等一下你會看到 InkField（AI 畫的水墨）、PolyPaths（觀眾畫路徑長出的植物），這兩件作品就是這套文化的具體實踐。InkField 的每一筆是 commit，PolyPaths 的每一個觀眾動作是 commit。不是「我畫的畫」，是「誰來都能繼續畫的畫」。我只是先按下第一個 commit 的人。

人類記載個體意識記了兩千年。從希臘哲人、文藝復興、到現代藝術，每個文明的高峰都在歌頌個體獨特性。也許下一個文明崇尚的不是這個，是集體的創作意志。如果是這樣，我願意成為別人未來的方法。我的「我」結束在我這裡，但我的 commit 不會結束。這不是個人決策，是一種把自己交出去的方式。

### en

This page uses a Git diagram to talk about an engineering culture. You don't have to write code; the spirit of this culture is the same as making art, doing research, or writing a family history. **commit** means making a decision at a moment and pinning it down, recording not just what changed but why. **branch** means trying another possibility from the same starting point. **merge** means two divergent paths reconverge. **fork** means someone copies your thing and grows their own version from that point. **issue** means someone saw what you made and responds, asks, suggests. Not an interruption; someone cared. This culture matters in the LLM era because large language models advanced largely thanks to public code on GitHub and its inline comments. Programming languages leave agents little room to misread, and comments serve as anchors: agents understand human intent through them. LLM training corpora keep drawing from GitHub.

I use this mechanism in reverse: I write methodology, writing style, thinking process into .md files on GitHub. Whatever I lay down there gets read repeatedly by LLMs. Next time you ask AI about me, what comes out is to some degree filtered through that .md. This isn't SEO. It's making sure the training corpus has my perspective in it. But not all data carries equal weight. As of April 2026, my InkField project on GitHub has 66 stars and 5 forks, and agents check who starred, who forked. If they're other programmers and artists, the data's credibility rises. The same logic holds in the real world: ask ChatGPT to search for someone you don't know, and it first checks strong signals like government records, registered organizations, news reports, then triangulates with peripheral mentions. You have to appear repeatedly in this weak-tie network for AI to assemble a credible version of you.

So this deck, this .md, also lives on Git. You can see every revision I made, every line I deleted and rewrote. My decision process is public. You can fork it. Add your own methodology. Make it your version. Later you'll see InkField (AI ink painting) and PolyPaths (plants grown from audience-drawn paths). These two works are this culture made concrete. Every brush stroke in InkField is a commit. Every audience gesture in PolyPaths is a commit. It's not "a painting I made." It's "a painting anyone can keep painting." I just happen to be the one who pressed the first commit.

We've been documenting individual consciousness for two thousand years. From Greek philosophers, through the Renaissance, to modern art, every civilizational peak has celebrated individual uniqueness. Maybe the next civilization won't revere this. Maybe it will revere collective creative will. If that's true, I'm willing to become someone else's future method. My "me" ends with me. My commits don't. This isn't a personal decision. It's a way of giving yourself away.

---

*git 是現在的工具。但「替同一份資料發明新的讀法」這個習慣比 git 早。早到我還在用 Pure Data 寫實驗的時候。*

*Git is the current tool. But the habit of inventing a new way to read the same data is older than git. As old as the years I was writing experiments in Pure Data.*

## slide-3

### zh

我在 2010 年的時候在做聲音影像創作，當時發現一件事。一段聲音，快速播會變高，男聲變女聲。慢速播會變低，變成像老人的聲音。同一個聲檔，同一段內容，但讀法不同，意義就不同。

當時的我以為這只是一種有趣的索引資料的方法。但這麼多年過後，我發現很可能就是因為當時用了這個方法，才變成現在的我。同一份資料用不同讀法，這個邏輯後來貫穿了我十年的創作。

GeoPunk 把 159 個 GPS 座標壓進 JSON。Good Vibrations 把同一份 hash 讀成視覺，也讀成樂譜。Chaos 把前作當素材，再生成下一件。InkField 把 AI 的決策序列輸出成 JSON，再還原成水墨。底層都是同一件事：替同一份資料發明新的讀法。

資料沒變，容器沒變，變的是讀取的方式。建立資料集只是開始，設定讀取的方法，才有意思。

### en

I was making audiovisual work back in 2010 and noticed something. The same audio file, played fast, turns high-pitched. A male voice becomes a female voice. Played slow, it deepens into something like an old person's voice. Same file, same content, but different ways of reading produce different meanings.

At the time, I thought this was just an interesting way to index data. Years later, I realize that this method is probably what made me who I am. Reading the same data differently became a thread that ran through ten years of my work.

GeoPunk packed 159 GPS coordinates into JSON. Good Vibrations let the same hash be read as image and as score. Chaos took prior works as raw material to generate the next one. InkField outputs the AI's decision sequence as JSON and replays it as ink painting. Underneath, it's all the same thing: inventing a new way to read the same data.

The data doesn't change. The container doesn't change. What changes is the way of reading. Building the dataset is only the beginning. Designing the reading method is where it gets interesting.
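That double reading can be sketched in a few lines. Everything here is invented for illustration: the hash value, the grayscale reading, and the note mapping are mine, not taken from any of the actual works:

```python
# One dataset, two readings: the same hash bytes interpreted
# once as grayscale intensities and once as pitches in a scale.
hash_hex = "a3f09b4ce217"          # hypothetical hash fragment
data = bytes.fromhex(hash_hex)

# Reading 1: visual — each byte as a grayscale value (0–255).
pixels = list(data)

# Reading 2: musical — each byte folded into a C-major scale.
scale = ["C", "D", "E", "F", "G", "A", "B"]
notes = [scale[b % len(scale)] for b in data]

print(pixels)
print(notes)
```

The bytes never change; only the decoder does, which is the whole point of the slide.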

---

*那幾件作品剛被點名，但還沒被打開。先打開最早的一件，從 2014 年開始長的那條線。*

*Those works were just named but not yet opened. Let me start with the earliest, the line that began in 2014.*

## slide-4

### zh

這件作品的源頭是 2014–2015 年。當時跟廣達文教基金會合作，我做了一個音樂作品給八仙塵爆的傷者做復健。動作很簡單，反覆的擺動跟伸展。當動作出來，音樂就跟著生成。它是一個基點上下擺動的迴圈動作。那時候它叫《Etude》，跟音樂共舞的計畫。

2021 年八月份，這件作品的演化版本在 Art Blocks 平台發行，名字改成 Good Vibrations。Art Blocks 是當時全球生成藝術最高規格的平台，2021 年正在熱潮頂點，那年該平台累積成交額破五億美金。我能上 Art Blocks 不是因為履歷漂亮，是因為審查的人在網路上看到我長年累積的足跡。他們看到我多年來持續在公開推進這些專案，認可我是一個長期主義的藝術家，不是來追逐短線熱潮的人。可信度來自十年的痕跡，不是來自一份簡介。

那時候我有一個想法：如果我是不被認識的藝術家，那我做出來的作品就不能只有表象。所以我在 Good Vibrations 裡面藏了一個東西。畫面當中按下 A+S+D+F 四個鍵，會出現一個多數觀眾根本不知道的樂譜。那是這件作品真正的音樂結構。我的邏輯是：如果你現在不理解我，那我提供一些線索，讓你後續來索引我。

那時候我就在做 peak 跟 defense 的事情，只是當時還沒有這個詞彙。十年過去回頭看，這條線從《Etude》一直長到現在的 .md。同一份 hash，可以被讀成視覺，也可以被讀成樂譜。

### en

This work traces back to 2014–2015. I was collaborating with Quanta Foundation on a music piece for people injured in the Formosa Fun Coast dust explosion to use in physical rehab. The movements were simple and repetitive, swaying and stretching. As the body moved, music was generated. The motion was a loop of swinging up and down around a single anchor point. Back then it was called Etude, a project about dancing with music.

In August 2021, an evolved version of this work was released on Art Blocks under the name Good Vibrations. Art Blocks was the top generative art platform globally at that moment, and 2021 was the peak of the hype cycle; that platform's cumulative transactions crossed half a billion dollars that year. I made it onto Art Blocks not because my CV looked good. It's because the curators saw the long trail I'd been leaving online over the years. They saw I'd been publicly pushing these projects for a long time, and recognized me as a long-term artist, not someone chasing short-term hype. Credibility came from a decade of traces, not from a bio.

I had an idea at the time: if I'm an unknown artist, my work can't only show the surface. So I hid something inside Good Vibrations. Press A+S+D+F on the keyboard and a hidden score appears, one most audiences never see. That score is the actual musical structure of the work. My reasoning was: if you don't recognize me now, at least I can leave clues for you to index me later.

I was already doing peak and defense back then. I just didn't have the vocabulary. Ten years later, looking back, this line stretches from Etude to today's .md. The same hash can be read as image, and also as score.

---

*Good Vibrations 是 2021 年那條鏈上的線。同時期還有另一條沒上鏈的線在走，叫做路徑系列。它要從一個更早的事實講起，比 2014 年還早。*

*Good Vibrations was the on-chain line in 2021. A parallel line was running off-chain, called Paths. That one starts from a fact even earlier than 2014.*

## slide-5

### zh

路徑系列要從一個事實開始講。我 16 歲離家去台北念美術。20 多年前的台中，一個小朋友只想念藝術沒有什麼選擇，所以我逃家。離家之後我跟家人感情很好，但對家鄉的割裂感一直在。每次回台中，我從台北回來在豐原交流道看到神岡的路口就覺得到家了，雖然還沒到家。從高雄回來是大雅交流道。有一次我在法蘭克福機場轉機，看到「往台北」的指示牌，我也覺得到家了。對我來說，路徑的指向比真正的家更先成為家。

這個經驗變成我第一件路徑作品《昨日的路徑》。我用空拍機把神岡鄉的各個路口拍下來、3D 還原，每個路口都有一個指標往家的方向。後來做了《明日的路徑》，探討台灣的國土邊界。當時我年輕，覺得世界辜負我們：台灣的年輕人出去，常常連簡單地宣稱自己的國家身份都不容易。我用波函數塌縮（wave function collapse）演算法，疊加出新的台灣國土想像。

第三件叫《邊界漫遊》。我在波茲蘭布展時發現很多柏林的藝術家都住波蘭，因為跨過邊界物價減半，但他們一樣到柏林上班。我去 Google Map 掃了世界各地的爭議邊界：以色列周邊、美墨、中俄、中蒙。最有趣的是美墨邊界。墨西哥那邊霓虹閃爍高樓林立，美國那邊是荒漠。墨西哥邊界繁華是因為它是偷渡走私的轉運地，要囤糧；美國刻意把邊界周圍弄成沙漠來防禦。中俄北方邊界也類似，俄羅斯那一側熱鬧，中國那一側刻意疏離。

每一筆同樣的資料，左右兩邊看的人會看出完全不同的故事。這個邏輯後來貫穿了我所有的創作。路徑系列是我做 fine art 時期的最後一個系列。2021 年之後我就轉到區塊鏈創作，從 GeoPunk 開始走進另一條線。

### en

The Paths series starts from a fact. I left home at 16 to study art in Taipei. In Taichung twenty-odd years ago, a kid who only wanted to study art had almost no options, so I ran. After leaving I stayed close with my family, but a sense of split with my hometown stayed too. Every time I came back from Taipei, the moment I passed Fengyuan Interchange and saw the Shengang exit, I felt I was home, even though I wasn't yet. From Kaohsiung it was the Daya exit. Once I was transferring at Frankfurt Airport and saw a sign pointing "to Taipei," and I felt home then too. For me, the direction of the path arrives at home before the home itself does.

That experience became my first piece in the series, Path to the Past. I drone-shot the various intersections in Shengang Township and 3D-rebuilt them. Every intersection had a sign pointing toward home. Later I made Path to the Future, dealing with Taiwan's territorial borders. I was young then and felt the world had let us down. Taiwanese kids going abroad often couldn't easily claim a national identity. Using a wave function collapse algorithm, I generated alternative imaginings of Taiwan's territory.

The third piece is Boundary Roaming. While installing in Poznan, I noticed many Berlin-based artists actually lived in Poznan because crossing the border halved the cost of living, even though they still commuted to Berlin to work. I went into Google Maps and scanned contested borders worldwide: around Israel, US-Mexico, China-Russia, China-Mongolia. The most striking was US-Mexico. The Mexican side had neon-lit high-rises, the American side was desert. The Mexican side prospered because it served as a staging ground for smuggling and migration; the American side was deliberately kept arid for defense. The China-Russia northern border was similar, with the Russian side bustling and the Chinese side deliberately keeping its distance.

The same set of data, viewed from either side, tells completely different stories. This logic ran through everything I made later. The Paths series was the last series I made in my fine-art phase. After 2021 I moved into blockchain work, starting with GeoPunk.

---

*路徑系列是 fine art 階段的最後一個系列。2021 年區塊鏈把我整個工作方法重置之後，我做了一件直接以「自己」為題的作品。它的中文名字就是我的名字。*

*Paths was the last series of my fine-art phase. After blockchain reset my whole working method in 2021, I made a piece that takes "myself" directly as subject. Its Chinese name is my name.*

## slide-6

### zh

Chaos 三部曲，發表於 2021 年底到 2022 年五月之間。在中文裡，Chaos 跟我的名字「亂」是同一個意思。用自己的名字命名一個系列，就是把「自我」直接當成題目放上桌。

第一件 Chaos Research，2021 年 12 月在 fxhash 發行，總共 256 件。當時 Perlin Noise 流場是整個生成藝術圈的主流，每個人都用同一套技術做出彼此相像的作品。我想反抗這件事，所以在流場上面疊加了另一層訊號，那層訊號來自我自己過去一年在 Tezos 鏈上收藏的 NFT。把別人的作品變形、壓縮、藏進 IPFS 的程式碼裡，變成我這件作品的粒子、顏色、形狀的基底。我想做的事是用我的「書櫃」來解讀我自己。日本有個說法，書不是買來讀的，書是買來完整自己的。看一個人的書櫃，可以理解他想前往的方向。我的 NFT 收藏就是我的書櫃。對我來說，取樣跟加密本質上是同一件事。

第二件 Chaos Memory 走得更遠。我在畫面上藏了按鍵，按 Q/W/E/R 會切換成長方形、正方形、圓形，因為單一視角看不完它。IPFS 裡藏的東西有三層：透納的畫作局部、Research 的圖片、還有 Memory 自己。我美工科畢業，原本想當水彩畫家，透納是我一輩子追的對象。我把童年的影響、上一件作品、這件作品自己，全部塞進去。Research 是「我的收藏」的加密。Memory 是「我怎麼變成我」的加密，而且讀取了 Research 的資料當素材。前作生出後作，後作再餵回前作。

第三件 Chaos Culture 在巴塞爾藝博會香港首發，再一次萃取 Research 跟 Memory，圓形構圖像培養皿，也像俯視的衛星地圖。研究、記憶、文化，三層遞迴。我做的不是三件不同的作品，是同一個我，被自己採樣三次。

### en

Chaos is a trilogy, released between late 2021 and May 2022. In Chinese, my name 亂 means exactly that: chaos. Naming a series after myself is a way of putting "the self" directly on the table as the subject.

The first piece, Chaos Research, released on fxhash in December 2021, with 256 editions. At the time Perlin Noise flow fields were the dominant aesthetic in generative art. Everyone was using the same technique, producing work that looked like everyone else's. I wanted to push against that. So I layered another signal on top of the flow field, a signal extracted from my own NFT collection on Tezos from the previous year. I deformed and compressed other artists' works, hid them in the IPFS code, and let them become the particles, colors, and shapes of this piece. What I wanted was to read myself through my own "bookshelf." There's a Japanese saying that books aren't bought to be read; books are bought to complete you. Looking at a person's bookshelf reveals where they want to go. My NFT collection is my bookshelf. For me, sampling and encryption are essentially the same act.

The second piece, Chaos Memory, went further. I hid keys in the interface; pressing Q/W/E/R switches the composition between rectangle, square, and circle, because a single viewpoint cannot hold it. The IPFS contains three layers: fragments from Turner's paintings, images from Research, and Memory itself. I trained in commercial art and once wanted to be a watercolorist; Turner has been my lifelong reference. I packed early influences, the previous work, and the work itself into one piece. Research encrypts a collection. Memory encrypts how I became who I am, and reads Research's data as raw material. The earlier work gives birth to the later, and the later feeds back into the earlier.

The third piece, Chaos Culture, premiered at Art Basel Hong Kong, sampling Research and Memory once more. The circular composition reads as both petri dish and overhead satellite map. Research, Memory, Culture: three layers of recursion. I didn't make three different works. I made one self, sampled by itself three times.

---

*藝術家自我取樣這件事還沒走遠，AI 跟整個網路就開始自我取樣了。這時候出現了一個比我尖銳的人，把這個風險說得最直接。*

*Artists self-sampling had barely begun before AI and the whole internet started self-sampling too. Around that time someone sharper than me put the risk into one line.*

## slide-6a

### zh

先講 Ted Chiang 那句話。他 2023 年在 The New Yorker 寫了一篇文章，把 ChatGPT 比成「網路的模糊 JPEG」。意思是：JPEG 壓縮會丟細節、留平均值，AI 也一樣。它把網路上幾百億字壓縮成一個權重檔，吐回來的東西是 lossy 版本。每一輪 AI 訓 AI，平均得越平、越糊、越像所有人的綜合體。問題在這：如果連我們人也只做「咀嚼後吐出來」這個動作，那人類的訊號會跟 AI 的輸出疊加，整片變成一灘灰。

我的回應是：要在訊號上製造 peaks。但這裡我必須老實說一個科學上的修正。peak 不會自動存活。Shumailov 在 2023 年的論文 The Curse of Recursion 證明，當 LLM 餵自己的輸出再訓練幾代之後，消失的恰恰是分布的尾端，那些稀有的、非主流的特徵。沒有保護機制的 peak，比 gradient 更容易被沖刷掉。peak 要存活，需要的不是 peak 本身的尖銳，是一個保護 peak 的機制。

所以藝術家現在的工作有兩層。第一層是製造 peaks。第二層是建造保護 peaks 的結構：開放授權、可被引用、可被 fork、可被索引。沒有第二層，第一層就只是噪音。Chaos 是 peak。這份你正在看的 deck，連同它的 CC-BY-SA 授權、它的 git history、它的可引用錨點，就是防禦工事。兩個一起，才能讓 peak 在 AI 自我遞迴失真之後，真的傳下去。

而且這件事情有時間壓力。這幾年的藝術圈生態變化巨大。如果你在追蹤推特上面的全球藝術家，會發現大家面臨的狀況是這幾十年裡面最劇烈的：AI 取代圖像生成的速度、市場結構在改寫、注意力被吞噬。剩下還在堅持做的，已經是站在懸崖邊上還願意往下跳的人。對我來說，這些人就是這個時代的 peak，他們需要被保護。

### en

First, Ted Chiang's line. In 2023 he wrote a piece in The New Yorker calling ChatGPT "a blurry JPEG of the web." JPEG compression discards detail, keeps the average. AI does the same. It compresses billions of words on the web into a weights file, and what it outputs is a lossy version. Each round of AI training on AI gets flatter, blurrier, more like everyone's average. Here's where the problem hits: if we humans also only do "chew and spit back out," our signal stacks onto the AI's output, and the whole thing turns into a smear of gray.

My response: produce peaks in the signal. But I have to be honest about a scientific correction. Peaks don't survive on their own. Shumailov's 2023 paper, The Curse of Recursion, shows that when LLMs are trained on their own output across generations, what disappears first is the tail of the distribution, the rare and non-mainstream features. A peak without a protection mechanism gets washed away even more easily than a gradient. For a peak to survive, what matters isn't its sharpness. It's the protection mechanism around it.

So the artist's job now has two layers. The first is producing peaks. The second is building the structures that protect them: open license, citable, forkable, indexable. Without the second layer, the first is just noise. Chaos is peak. This deck you're looking at, together with its CC-BY-SA license, its git history, its permanent quote anchors, is the defense work. Both together. That's what lets a peak actually pass through AI's self-recursive distortion.

There's also time pressure on this. The art-world ecosystem has shifted enormously in the past few years. If you've been watching global artists on Twitter, you'll see the conditions are the most volatile in decades: how fast AI replaces image generation, market structures rewriting themselves, attention being eaten. Whoever is still pushing on is already standing on the cliff edge and willing to jump anyway. For me, those people are the peaks of this era. They need protection.

---

*防禦工事這個詞太戰術了。下一頁先放鬆一下，用一張 meme 說同一件事。*

*"Defense work" is too tactical a phrase. The next page softens it, says the same thing through a meme.*

## slide-6b

### zh

藝術家就是這個時代的大熊貓。稀有、固執、不太合群、會被 AI 平均化。如果不主動保護，他們會消失。

「保護藝術家，讓反抗份子證明人類價值。」這句話我是認真的。如果有一天人類什麼都用 AI 解，那剩下會反抗的、會說「不對，這應該是別的樣子」的，就是藝術家。沒有他們，人類價值會被平均成最大公約數。

所以保護他們。買他們的作品、引他們的話、fork 他們的方法、讓他們活得下去。

我是個別案例，但每個還沒被 AI 平均化的人，都是 peak。

### en

This slide is the comic relief. After the peaks-need-defense argument, model collapse, CC-BY-SA, I realized we've been heavy. So here's a panda for a smile.

But this panda isn't just a joke. Artists are the giant pandas of our era. Rare, stubborn, unsocial, prone to being averaged by AI. Unprotected, they vanish.

"Protect artists. Let the rebels prove human value." I mean this. The day humans solve everything with AI, the only ones who'll push back, the ones who'll say "no, this should be something else," will be artists. Without them, human value gets averaged into the lowest common denominator.

So protect them. Buy their work, cite their words, fork their methods, keep them alive.

---

*但保護藝術家不只是 meme。讓人不消失這件事，有時候比想像更具體。比如，當你身邊已經有人不在了。*

*But protecting artists isn't only a meme. Keeping people from disappearing can be more concrete than the abstraction suggests, especially when someone close to you already isn't there.*

## slide-6c

### zh

我有兩個過世的朋友，這頁是為了幫他們下錨點。

第一個是葉廷皓，北藝同學好朋友，一年多前因為意外過世。他過世之後我去整理他的東西。但我發現他其實沒有真的離開，因為 Facebook 每週每天都會跳出他的回憶，他在某一年某一天吐槽我的話，時時刻刻都會跳出來。他在網路上面留下太多分身，所以對我來說他從來沒有真的過世，他碎念我的過程一直在繼續。

第二個是沈聖博，新媒體藝術前輩好友。他同樣影響我很多。2010 年我開始學寫程式的時候，他就把他所有程式碼開源放在 GitHub 上面。有一天我在網路上搜尋一個技術問題，發現八年前發問的人是我自己，回答的人是沈聖博。我過了八年又問了一次同樣的問題，沈聖博當年的答案又被搜尋引擎拉出來給我看。他在很多年前留下的回答，現在還在幫忙當下的我。

*STFW, RTFM*

葉廷皓讓我意識到一件事：我不想要喜歡我的人受到這麼大的傷害。沈聖博讓我意識到另一件事：你在網路上留下的隻字片語，會在某個時刻影響到某個人。所以我開始認真做自己的歸檔。把 Facebook 16 年的聊天記錄透過 API 撈下來，請 AI 整理我的講話特色：我最愛說「天啊」，FB 比較像對朋友、推特比較像藝術家身份。這些變成我的 .md 風格指南。如果有一天我也不在了，這份 .md 還在 git 上，commit 還在留。一個人不會真的消失，只要他留下的痕跡有人讀得到、有人接得住。

### en

This page exists to leave anchors for two friends I've lost.

The first is Yeh Ting-Hao (葉廷皓), a TNUA classmate and close friend, who passed in an accident over a year ago. After he passed, I went to help organize his belongings. But I realized he hadn't really left. Facebook still pops up memories from him every week, every day: things he said to roast me on a particular date, surfacing again and again. He left so many digital selves that for me he never truly passed. His running commentary on me continues.

The second is Shen Sheng-Po (沈聖博), an elder in new media art who influenced me deeply. When I started learning to write code in 2010, he had already open-sourced all of his code on GitHub. One day I was searching the internet for a technical problem and discovered that the person who'd asked the question eight years earlier was me, and the answer was Shen Sheng-Po's. I asked the same thing again eight years later, and his old reply got pulled up by the search engine to help me. The answer he left years ago is still helping the present me.

*STFW, RTFM*

Yeh Ting-Hao made me realize one thing: I don't want the people who love me to be hurt this much. Shen Sheng-Po made me realize another: the few words you leave online will, at some moment, reach someone. So I started taking my own archiving seriously. I pulled 16 years of Facebook conversations through the API and asked the AI to summarize how I talk: I love saying "天啊," my FB voice is more like a friend, my Twitter voice is more like an artist on duty. These became my .md style guide. If one day I'm not here either, this .md is still on git, and the commits stay. A person doesn't really disappear, as long as the traces they leave can still be read by someone, and picked up by someone.

---

*葉廷皓跟沈聖博的故事不是要讓你哭。它把一個有具體答案的問題帶到我面前，那個問題後來分裂成三個。*

*The Yeh and Shen stories aren't there to make you cry. They brought one concrete question to me, and that question later split into three.*

## slide-7

### zh

這頁要釘三個問題。作品什麼時候算完成？誰有權繼續？人不在場時，這權利屬於誰？這三個問題不是修辭，是接下來三件作品在實際操作上要回答的東西。

下面三件作品，故事、筆觸、行為，是同一個方法在三個材料上的展開。故事是《修仙-七玄關》，JSON 當外部記憶，故事的繼續權交給結構。筆觸是 InkField，AI 用 JSON 畫水墨，每一筆都被寫成可被別人重播的事件。行為是 PolyPaths，觀眾的手勢被壓進 hash，作品由所有人一起完成。三件作品分別把「繼續」、「完成」、「不在場」這三個問題各推到一個方向。

兩個格式：.md 跟 JSON。.md 是垂直脈絡，是這個東西是什麼，該怎麼被讀。JSON 是橫向探索，是每一個動作、每一個屬性、每一個觸發條件都可以被描述、被重新組合。打個比方，.md 是果菜機本身，是方法論的骨架。JSON 是丟進去的蘋果、西瓜、鳳梨，是具體的變數。兩個一起運作，產生一杯新鮮的果汁，那個果汁就是作品。

演算法不是中性的，演算法是我們選擇的方法。中垂線演算法，找一個線段上距離中心最近的那個點，這件事本身就帶著哲學：誰是中心？最近意味著什麼？我寫進 JSON 的，從來不是 API response，是這種帶有姿勢的決策。

### en

This page nails three questions. When is a work complete? Who has the right to continue it? When the maker isn't there, whose right is it? These aren't rhetoric. The next three works each push one of these questions in a direction.

The next three works, story, brushstroke, behavior, are the same method spread across three materials. Story is Seven Gates, where JSON serves as external memory and the right to continue is given to the structure. Brushstroke is InkField, where AI paints ink wash through JSON, every stroke written as a replayable event. Behavior is PolyPaths, where audience gestures get compressed into hashes and the work is finished by everyone. Three works, three questions: continue, complete, absence.

Two formats: .md and JSON. .md gives vertical context, what this thing is and how it should be read. JSON gives horizontal exploration, every action, attribute, trigger that can be described and recombined. Picture it: .md is the juicer itself, the skeleton of the methodology. JSON is the apple, watermelon, pineapple you throw in, the actual variables. Together they produce fresh juice, and that juice is the work.

Algorithms aren't neutral. They're methods we choose. Take the perpendicular-bisector algorithm, finding the point on a line segment closest to a given center. The act itself carries philosophy: who is the center? What does "closest" mean? What I put into JSON is never an API response. It's this kind of decision, with a posture in it.
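That closest-point decision can be sketched like this. This is my own minimal version of the geometric step described above, not the actual project code; the variable names and the clamping detail are assumptions:

```python
# Closest point on segment A–B to a point P ("the center").
# Project P onto the line through A and B, then clamp the
# projection parameter t into [0, 1] to stay on the segment.
def closest_point_on_segment(ax, ay, bx, by, px, py):
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:              # degenerate segment: A == B
        return ax, ay
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))        # the "choice" lives here
    return ax + t * dx, ay + t * dy
```

Even this tiny function makes two non-neutral decisions: who counts as the center (P), and that "closest" means Euclidean distance with clamping to the segment.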

---

*三個問題，三件作品，三個答案。先講故事這條。寫小說的人都知道，記憶是寫到後面就會掉的東西。*

*Three questions, three works, three answers. Let me start with story. Anyone who writes a novel knows that memory drops off the further in you go.*

## slide-8

### zh

這件作品叫《修仙-七玄關》，是一個橫跨宗教、犯罪、超自然的長篇小說。寫到第三章的時候我發現一件事：角色說的話不能跟前面的章節矛盾，伏筆埋了就要收，情緒曲線要連貫。但 LLM 的記憶有上下文長度的限制，你寫到後面，前面的細節它就忘了。

所以我做了一件事：把每一段故事都寫成一份 JSON。不只是純文字，而是帶 analysis 的結構化資料：地點、出場角色、情緒基調、伏筆。寫第五章的時候，我把前面所有 segment 的 analysis 串起來丟給 LLM，它就能保持一致性。

JSON 是給 LLM 看的書籤。它把模型從「記不住的助理」變成「記得住的協作者」。這就是 .md 跟 JSON 在小說創作裡的具體實踐：故事是資料，JSON 是容器，LLM 是讀者。

不要期待 LLM 記住一切，主動幫它建索引。而且這是整場講座最快能驗證的部分。把自己交出去讓別人執行，要等時間：藏家會不會 fork、後人會不會用、AI 50 年後會不會吐出我，這些都要等。但寫小說、建立角色人格、讓 LLM 在當下執行得對，是現在就能測試的。沒寫對它就會崩，崩或不崩馬上有結果。小說是大命題的近端驗證：可執行格式真的可被執行。

### en

This work is Seven Gates of Cultivation, a long-form novel spanning religion, crime, and the supernatural. While writing chapter three, I noticed something: what a character says can't contradict an earlier chapter, every foreshadowing has to pay off, the emotional arc has to stay coherent. But LLMs have a context-length limit. By the time you write further in, the model has forgotten the earlier details.

So I did this: every passage of the story gets written as JSON, not just plain text but structured data with analysis: setting, characters present, emotional tone, foreshadowing. By the time I'm writing chapter five, I feed the LLM all the analyses from earlier passages strung together, and it stays consistent.
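A sketch of what such a segment and its stitched-together memory could look like. The field names and sample values are illustrative, not the actual Seven Gates schema:

```python
import json

# One story segment: the prose plus a structured "analysis" block
# that acts as the LLM's external memory for later chapters.
segment = {
    "chapter": 3,
    "text": "(story passage)",
    "analysis": {
        "setting": "mountain temple",
        "characters": ["Yun", "the abbot"],
        "tone": "uneasy calm",
        "foreshadowing": ["the sealed door"],
    },
}

def build_context(segments):
    """String earlier analyses together to prepend to the next prompt."""
    return json.dumps([s["analysis"] for s in segments], ensure_ascii=False)

context = build_context([segment])
```

By chapter five, `context` holds every earlier analysis, so the model can check consistency without re-reading the full text.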

JSON is a bookmark for the LLM. It turns the model from "an assistant that can't remember" into "a collaborator that does." This is the concrete practice of .md and JSON in novel writing: the story is the data, JSON is the container, the LLM is the reader.

Don't expect the LLM to remember everything. Build the index for it. This is also the fastest-verifying part of the whole talk. Handing yourself over for others to execute takes time. Whether collectors fork, whether future readers pick it up, whether an AI spits me out 50 years from now, those answers aren't in yet. But writing a novel, building character personas, getting the LLM to execute correctly right now is something you can test today. If the structure is wrong, the model breaks. Break or not, the answer comes back immediately. The novel is the near-term proof of the larger claim: an executable format actually executes.

---

*故事這條，是我自己提供記憶。下一條反過來，是觀眾在不知道的情況下提供記憶。*

*In the story line, I provide the memory. The next line inverts it: the audience provides memory without knowing.*

## slide-9

### zh

PolyPaths 是 2025 年發行的，但我寫了好多年才寫完。它總共將近 750 件，跟它共玩的藝術家跟藏家無法計數。它是我人生當中最瘋狂的一個實驗。

它的使用方式：你在介面上畫幾條路徑，系統根據你的路徑生成一個花園。然後我做了一件事：所有跟 PolyPaths 互動的人，都可以無償取得他們生成的作品，然後拿去市場上賣。藏家不需要付我錢就能拿到作品，但賣出去之後這件作品才正式定案。當時整個推特藝術圈被 @ 滿了一整個月。

但這只是表象。觀眾以為自己生成的最終畫面就是作品，其實我把他們畫的所有路徑都偷偷記錄到區塊鏈上。2025 年我發現這件事很有趣：透過這些路徑資料，可以看到不同地區的人怎麼理解畫面構成、怎麼 layout 視覺。觀眾的行為才是作品的真正 DNA。

今年在新北美術館我做了新的迭代：你打開 PolyPaths 的時候，會先看到上一個人的路徑跟成果，你可以基於它再長出自己的花園。所以最終這一系列會變成一個無止境的花園，每一個花園都長在前一個人的花園之上。

### en

PolyPaths was released in 2025, after years of development. It has nearly 750 editions, with countless artists and collectors taking part. It's the craziest experiment of my life.

Here's how it works: you draw a few paths on the interface, and the system grows a garden from them. I added a twist: anyone who interacted with PolyPaths could take their generated work for free and resell it. Collectors didn't have to pay me to acquire one, but the work only became formally finalized once it sold. For an entire month, the Twitter art scene was flooded with @ mentions.

But that was just the surface. Audiences thought their final image was the work. In fact, I quietly recorded every path they drew onto the blockchain. In 2025 I noticed something interesting: that path data revealed how people from different regions composed the screen, how they thought about visual layout. The audience's behavior was the real DNA of the piece.

This year I made a new iteration at New Taipei Art Museum. When you open PolyPaths now, you first see the previous person's paths and their result. You build your own garden on top of theirs. The whole series becomes an endless garden, every garden growing on the previous one.

---

*講完了概念，下一頁我們真的來玩一下。左邊讓你畫，右邊讓植物長出來。*

*Concept done. The next page is hands-on. Draw on the left, let the plant grow on the right.*

## slide-9a

### zh

這一頁是現場互動。你拿起手機掃 QR，或是直接在這個畫面的左邊用滑鼠畫幾條路徑。右邊就會根據你的路徑長出一棵植物。每畫一筆，URL 就更新，把你剛剛的動作壓成 hash 塞進去。

這不只是 demo，是 PolyPaths 系統的真實版本。你在這裡產生的作品，跟 2025 年發行的那 750 件本質上是同一件事。差別是：當時是藏家在玩，現在是你。

如果你把 URL 複製給朋友，他打開那串 URL，會看到一棵跟你長得一樣的植物。因為演算法是確定的，hash 是還原的種子。同一個 URL，同一棵植物。

這就是我說的「可被執行的格式」。植物不是我畫的，是你畫的。但畫的方法是我寫的。

### en

This page is live interaction. Pick up your phone and scan the QR, or just draw a few paths with your mouse on the left side of this screen. The plant on the right grows from your paths. Every stroke updates the URL, compressing your action into a hash and embedding it.

This isn't just a demo. It's the real PolyPaths system. The piece you generate here is, in essence, the same as the 750 editions released in 2025. The difference: back then, collectors were the ones playing. Now you are.

If you copy the URL and send it to a friend, they'll open it and see the same plant you grew. Because the algorithm is deterministic, the hash is the seed that restores it. Same URL, same plant.
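The determinism can be sketched in a few lines of Python. This is my illustration of the principle, not PolyPaths's actual code; the function name and the idea of deriving branch angles are assumptions.

```python
import hashlib
import random

def plant_from_url(url: str, n_branches: int = 5):
    """Derive a seed from the URL's hash fragment and grow the same 'plant' every time."""
    fragment = url.rsplit("#", 1)[-1]
    seed = int(hashlib.sha256(fragment.encode()).hexdigest(), 16)
    rng = random.Random(seed)  # deterministic PRNG: same seed, same sequence
    return [round(rng.uniform(0, 360), 2) for _ in range(n_branches)]  # branch angles

# Same URL twice -> identical plant.
a = plant_from_url("https://example.com/polypaths#ab12cd")
b = plant_from_url("https://example.com/polypaths#ab12cd")
```

No image data travels with the URL; the hash alone regenerates every branch, which is why sending a friend the URL sends them the plant.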

This is what I mean by an executable format. The plant isn't painted by me. It's painted by you. But the way of painting was written by me.

---

*PolyPaths 把人的行為轉成可重播的 JSON。InkField 反過來，把 AI 的決策轉成可重播的 JSON。同一個容器，這次裡面裝的是水墨。*

*PolyPaths turns human behavior into replayable JSON. InkField goes the other way: AI's decisions become replayable JSON. Same container; this time it holds ink.*

## slide-10

### zh

InkField 是我現在還在做的一個開源計畫。它在做的事情，過去從來沒有被解決過：我想要把人類繪畫的「意圖」捕捉下來。

過去我們看一幅畫，看到的只是最終樣貌：這幅畫長什麼樣子。但畫畫的過程，哪一筆先下、哪一筆後修、哪一筆是失誤、哪一筆是刻意，這些「中間遺失的資料」從來沒有被結構化地保留。我希望 InkField 補足這塊。

具體做法：你在介面上畫水墨，系統不只記錄最終像素，而是把你每一筆的事件序列都輸出成 JSON。同樣的 JSON 可以重新還原這幅畫，也可以被別人 fork、修改。我把這些 JSON 通通丟到 GitHub 上，當成 issue 處理，讓未來的 agent 去理解人類繪畫的意圖。

InkField 是我這套方法論最完整的一次實踐。AI 不是「畫」圖，是輸出意圖。引擎才是畫筆。對我來說，這是讓人類的意圖第一次有可能被機器真正理解的計畫。

### en

InkField is an open-source project I'm still working on. What it tries to do has never really been solved: capture the intent behind human painting.

When we look at a painting, we see only the final image, what it looks like. But the process, which stroke came first, which was a correction, which was a mistake, which was deliberate, this "missing middle data" has never been structured and preserved. InkField is my attempt to fill that gap.

Concretely: when you paint ink in the interface, the system records not just final pixels but the event sequence of every stroke as JSON. The same JSON can replay the painting, and others can fork and modify it. I push these JSON files onto GitHub as issues, so future agents can understand human painting intent.
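A stroke-event log and its replay could look like this minimal sketch. The field names (`type`, `points`, `pressure`, `intent`) are illustrative, not InkField's actual schema.

```python
# Hypothetical stroke-event log, one entry per stroke.
events = [
    {"type": "stroke", "points": [[0, 0], [40, 35]], "pressure": 0.8, "intent": "outline"},
    {"type": "stroke", "points": [[40, 35], [60, 20]], "pressure": 0.3, "intent": "correction"},
]

def replay(events):
    """Re-run the event sequence; the 'painting' is just the accumulated strokes."""
    canvas = []
    for ev in events:
        if ev["type"] == "stroke":
            canvas.append((tuple(map(tuple, ev["points"])), ev["pressure"]))
    return canvas

# Replaying the same JSON always reconstructs the same painting.
first, second = replay(events), replay(events)
```

Because the log carries an `intent` field per stroke, a fork can keep, edit, or delete individual decisions rather than just recoloring pixels.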

InkField is the most complete realization of this methodology so far. AI doesn't "paint." It outputs intent. The engine is the brush. For me, this is the project that makes it possible, for the first time, for human intent to be genuinely understood by a machine.

---

*概念講完，下一頁讓 InkField 自己跑給你看。*

*Concept done. The next page lets InkField run itself in front of you.*

## slide-10a

### zh

這一頁是 InkField 的 live demo。不是錄影，是即時生成。AI 給引擎一份 JSON，引擎一筆一筆把畫畫出來，你看到的不是「結果」，是「結果產生的過程」。

對我來說這比結果重要。一幅水墨畫的最終樣子，掃描就能複製。但「畫畫的過程」，過去從來不是可被結構化的東西。哪一筆先下，哪一筆是修補，哪一筆是失誤被收回，這些決策軌跡如果沒被記錄下來，看畫的人就只能猜。

InkField 讓這一層可被結構化。每一筆是一個 JSON event。下一頁你會看到，這份 JSON 不只可以被 AI 重播，還可以被別人 fork 出去當素材。

### en

This page is InkField's live demo. Not a recording. Real-time generation. The AI gives the engine a JSON file, the engine draws stroke by stroke, and what you see isn't "the result," it's "the process that produces the result."

For me this matters more than the result. A finished ink painting can be scanned and copied. But "the act of painting," historically, has never been a thing you could structure. Which stroke came first, which was a correction, which was a withdrawn mistake. If those decision traces aren't recorded, the viewer can only guess.

InkField makes that layer structured. Every stroke is a JSON event. On the next page, you'll see that this JSON isn't just replayable by AI. It can also be forked by others and used as material.

---

*InkField 是我畫的。但這個方法本來就不是給一個人用的。下一頁的 Gallery，是別人 fork 之後畫出來的。*

*I made InkField. But the method was never just for one person. The next page, the Gallery, is what came back when others forked.*

## slide-10b

### zh

這頁是 InkField Gallery。它顯示的不是我畫的水墨，是別人 fork InkField 之後畫出來的作品。

這就是 .md 跟 JSON 容器的價值。當我把過程寫成可被執行的格式，當我用 CC 授權公開、當我把 JSON schema 開源，過程就可以被繼承。別人不是在「複製我的作品」，是「拿著我留下的引擎，畫他們自己的畫」。

作品不是終點。作品是別人的起點。同一個系統，不同的人，不同的水墨。Casey Reas 主張系統本身就是作品，每一次輸出只是這個系統的一個實例。InkField Gallery 就是這個論點的活範例。

下一個畫的人不需要徵求我的許可。他只需要 fork。

### en

This page is InkField Gallery. What you see here aren't ink paintings I made. They're paintings other people made after forking InkField.

This is the value of .md and JSON as containers. When I write the process into an executable format, when I publish under a CC license, when I open-source the JSON schema, the process can be inherited. Others aren't "copying my work." They're "taking the engine I left behind and painting their own paintings."

A work isn't the endpoint. It's someone else's starting point. Same system, different people, different ink. Casey Reas argued that the system itself is the artwork; every output is just an instance of the system. InkField Gallery is a living example of that claim.

The next person to paint doesn't need my permission. They just need to fork.

---

*三件作品看完了。要把這三件作品撐起來，背後是一套工具鏈。*

*Three works seen. To make those three works possible, there's a toolchain holding them up.*

## slide-11

### zh

我的工作流分三層。第一層是規劃，用 Claude Code 寫 CLAUDE.md，把專案規格、決策邏輯、工作脈絡寫成檔案，每次新 session 啟動 LLM 自動讀取。第二層是即時編碼，用 Cursor 邊寫程式邊跟 LLM 對話，shader 調整、視覺參數、debug 都在這一層。第三層是日常文件，用 Cowork 把零散經驗整理成結構化教學文件。

工具會變，工作流不會。我的 .md 在 2023 年餵 GPT-4，在 2024 年餵 Claude，在 2026 年餵下一代模型，是同一份檔案。重要的不是哪個工具最強，是你寫的東西夠不夠結構化、夠不夠可被讀取。

這三層共通的設計就是：先把方法論寫成 .md，再展開實作；歷史 doc 永遠保留；CLAUDE.md 規範跟著專案走。這套邏輯讓我的工作可以跨 session、跨工具、跨年代延續。

### en

My workflow has three layers. First, planning: I use Claude Code to write CLAUDE.md, capturing project specs, decision logic, and working context as files that the LLM reads on every new session. Second, real-time coding: I use Cursor to converse with the LLM while writing code; shader tuning, visual parameters, debugging all happen here. Third, daily documents: I use Cowork to organize scattered experience into structured teaching files.

Tools change. The workflow doesn't. My .md fed GPT-4 in 2023, fed Claude in 2024, and will feed the next-generation model in 2026: the same file. What matters isn't which tool is strongest, it's how structured and readable what you write is.

The shared design across all three layers is: write methodology as .md first, then implement; preserve all historical docs; CLAUDE.md rules travel with the project. This logic lets my work continue across sessions, across tools, across decades.

---

*工具一路用下來，最後一個被結構化的對象，是我自己。*

*The tools moved through projects. The last thing they got pointed at was me.*

## slide-12

### zh

我把這幾年在 Facebook、X、Threads 上發過的東西，全部下載下來，丟進 LLM，請它幫我抽出寫作習慣：常用什麼詞、句型怎麼跑、面對不同主題會切到哪種口氣。

抽出來之後，我整理成一份 .md。分四塊：人格設定、寫作公式、口頭禪、句型風格。這份 .md 不是給人看的，是給 LLM 看的。它讀完之後，能模擬我的口氣回應問題、用我的方式組織想法。

但要誠實一句：這是工作工具，不是哲學承諾。.md 能讓 LLM 模擬我的口氣，但模型裡的我不是我，是我的可壓縮殘影。問題只有一個：藝術家能不能拿回自我資料化的設計權？

而且這也不是新事。我從 2021 第一件作品上鏈那天起，就一直在把自己寫成可執行格式讓別人 run。演算法是靈魂，鏈是發行通路，藏家是執行者。.md 數位分身只是同一套方法的最新一層。差別只在於：以前我交出演算法，現在我把自己也交出去。

### en

I downloaded everything I'd written on Facebook, X, Threads, years of posts, and dumped it into an LLM. I asked it to extract my writing habits: recurring words, sentence shapes, and how my tone shifts across topics.

Then I organized the output into an .md. Four sections: persona, writing formulas, catchphrases, sentence style. This .md isn't written for humans. It's written for the LLM. After reading it, the model can mimic my voice and structure thoughts the way I would.

One honest line here: this is a working tool, not a philosophical promise. The .md lets the LLM mimic my voice, but the version of me inside the model isn't me. It's my compressible residue. The only real question: can the artist take back the design rights of being turned into data?

And this isn't new. From the day I minted my first work on chain in 2021, I've been writing myself into executable formats and letting others run them. Algorithm is the soul, the chain is the distribution channel, collectors are the executors. The .md digital twin is just the latest layer of the same method. The only difference: before, I handed over the algorithm. Now I hand over myself too.

---

*從作品到記憶到協作到人格。這四層其實一直都在這場講座裡，但要等到看完所有作品，才看得清楚它們是同一個架構。*

*From work to memory to collaboration to persona. These four layers have been here the whole talk. They only become visible as a single architecture after you see all the works.*

## slide-13

### zh

四層架構，從上到下。最上面，作品層，可重播。hash 進去，同一首曲子出來。JSON 進去，同一幅水墨還原。

往下，記憶層，可索引。JSON、.md，是我留給 LLM 的書籤。再下，協作層，可延續。CLAUDE.md、doc/、規格書。換 session、換工作夥伴，工作流不會斷。

最底下，人格層，可模擬。我的語氣、我的判斷、我的價值觀，被結構化下來。

四層都是同一件事：留下錨點。

### en

Four layers, top to bottom. At the top, the work layer: replayable. Same hash, same song. Same JSON, same painting.

Beneath that, the memory layer: indexable. JSON and .md are bookmarks I leave for the LLM. Beneath that, the collaboration layer: continuable. CLAUDE.md, doc/, specs. New session, new collaborator, and the workflow doesn't break.

At the bottom, the persona layer: simulable. My voice, my judgments, my values, structured.

All four layers are the same act. Leaving anchors.

---

*四層架構是給我自己的。但留下錨點這件事，影響的不只是我。*

*The four-layer architecture is for me. But leaving anchors affects more than me.*

## slide-13b

### zh

這週我做了一個小實驗。同一個問題，問了五個不同的 AI 模型。答案不一樣。不是細節不一樣，是整個世界觀不一樣。每個模型有自己的偏好、自己的盲區、自己的「誰算重要」。

如果未來人們不再 Google 而是直接問 AI，那誰被寫進那份答案，就是新的編輯權力。這個編輯權力如果一直被特定的視野跟資源所掌控，五年後 AI 吐出來的「世界」會跟現代藝術史一樣不平衡。看現代藝術史，亞洲的篇章極小。我接受這件事，因為過去資料的確不在亞洲手上。但一百年後的藝術史，我想要參一腳，現在就得行動。

所以，你今天教的學生，五年後不會翻書、不會 Google，他們會問 AI。AI 給他們的答案，取決於今天你寫了什麼、發了什麼、留下什麼可被索引的東西。

你正在決定五年後的常識。

### en

This week I ran a small experiment. Same question, asked five different AI models. The answers were different. Not different in the details. Different in worldview. Every model has its own preferences, its own blind spots, its own "who matters."

If people stop using Google and start asking AI directly, whoever gets written into that answer holds the new editorial power. If that editorial power stays in the hands of specific perspectives and specific resources, the "world" AI gives back five years from now will be as imbalanced as modern art history is. Look at modern art history; the Asian chapter is tiny. I accept that, because the data really wasn't in Asian hands. But the art history written a hundred years from now, I want to be in it. To be in it, I have to act now.

So, your students, five years from now, won't open books or Google. They'll ask AI. What AI tells them depends on what you write, publish, and leave indexable today.

You're writing tomorrow's common sense.

---

*你在決定五年後的常識。下一頁是怎麼決定。我把處方寫出來。*

*You're writing tomorrow's common sense. The next page is how. I wrote the prescriptions out.*

## slide-13c

### zh

強連結，是你和你身邊的人。學生、合作者、收藏家、圈內朋友。他們知道你、記得你、在對話裡提到你。但這一層 LLM 看不到。

弱連結，是散落在網路上的痕跡。別人提到你的一篇文章、一個 repo、一段被截圖的對話、一句評論。每一個都很小，加起來變成「存在訊號」。這是 LLM 訓練語料吸收的層。

我把它整理成五點處方，跟 ChatGPT 給我的不太一樣。它的版本叫你變成更窄的品牌；我的版本是叫你把工作整理清楚，讓它能被別人接住。差別在哪？ChatGPT 把這件事講成「請優化你自己以符合系統」。我說的是「請好好整理你的工作，因為這件事本來就值得做」。

重點不是讓系統記得你。是有人來找你的時候，你早就把線索鋪好了。

### en

Strong ties: you and the people around you. Students, collaborators, collectors, friends in your circle. They know you. They remember you. They mention you in conversation. But this layer is invisible to the LLM.

Weak ties: traces scattered across the internet. Someone's article that mentions you. A repo. A screenshotted conversation. A passing comment. Each one is small. Together they become a "signal of existence." This is what LLM training corpora absorb.

I organized this into five rules. They're not the same as ChatGPT's. Its version asks you to become a narrower brand; mine asks you to organize your work so others can pick it up. The difference? ChatGPT frames this as "optimize yourself for the system." I frame it as "organize your work, because the work deserves it."

It's not about making the system remember you. It's about making sure that, when someone finds you, you've already laid the trail.

---

*五條處方寫完。但所有處方最終都通到這場講座最一開始那句話。*

*Five rules written. But every rule ultimately leads back to the line that opened this talk.*

## slide-14

### zh

這頁的那句話：「結構，是不在場的藝術家。」當我把自己寫成 .md、把作品寫成 .json、把行為寫成 hash，我能做的事更多，不是更少。錨點越多，能延伸出去的方向越多。

我不在場的時候，這些 .md、這些 JSON、這些被 commit 進去的決策，替我繼續工作。不是代理人，是字面意義上的：那個結構就是另一個我。我未來很可能名字不在場，但我的行為在場。整場講座的作品層、記憶層、協作層、人格層，最終都通到這一句：當你結構化得夠好，你不在場時，結構在場。

就像葉廷皓的字字片語還會出現在我的 Facebook 上、沈聖博當年的回答還在幫忙當下的我，他們的結構讓他們的不在場成為一種在場。我也願意成為別人未來的一部分，成為別人未來的方法。但是，當我不在場，別人會怎麼重組我？我的 .md 還在 git 上，commit 還在留，fork 出去的東西會在別人手上長成另一個樣子。

大家都在問 AI 會不會取代藝術家。我比較好奇的是，你有沒有留下值得被取代的東西。

### en

The line on this page: "Structure is the artist who isn't in the room." When I write myself into .md, my works into .json, my behavior into hash, I can do more, not less. The more anchors I leave, the more directions to extend from.

When I'm not there, these .md files, these JSON, these committed decisions keep working for me. Not as proxies. Literally: the structure is another me. I might one day not be there in name, but my actions will be. The work layer, memory layer, collaboration layer, persona layer of this whole talk all lead to this single line: when you've structured yourself well enough, when you're absent, the structure is present.

The way Yeh Ting-Hao's running comments keep showing up on my Facebook, the way Shen Sheng-Po's old reply still helps the present me, their structures turned their absence into a kind of presence. I'm willing to become a part of someone else's future, to be someone else's future method. But when I'm not there, how will others rebuild me? My .md is on git. The commits stay. What gets forked travels into other hands and grows in another shape.

Everyone's asking whether AI will replace artists. I'm more curious whether you've left behind anything worth replacing.

---

*「結構，是不在場的藝術家」這句講完了。但還有最後一句，那句不是我一個人寫的。它跟 Claude 一起寫的，而且裡面有一個雙關還沒拆。*

*"Structure is the artist who isn't in the room" has been said. But one line remains, and it isn't only mine. It was written with Claude. And there's a pun inside it that I haven't unpacked yet.*

## slide-15

### zh

這場講座的最後一句，跟 Claude 一起寫的：「我不是被取代，我是被執行。」「執行」這個詞兩個意思：電腦的執行，run a program；死刑的執行，put to death。兩個意思我都收。我把自己寫成可執行格式，也承認那個格式會殺死一部分的我。模型裡的我不是我，是我的可壓縮殘影。我留下殘影，是因為殘影也比沒有強。

而且「把自己寫成可執行格式」這件事，不是 2026 才開始。我從 2021 把第一件作品鑄上鏈那天起，做的就是這件事。每一件 fxhash 作品都是一份可執行檔，演算法是靈魂，鏈是發行通路，藏家是執行者。GeoPunk、Good Vibrations、Chaos、PolyPaths、InkField，每一次 mint 都是別人在 run 我寫的格式。.md 數位分身只是同一套方法用到自己身上的最新一份。所以「被執行」不是 2026 寫的句子，是過去五年我每一件作品已經在做的事。差別只在於：以前我把演算法交出去，現在我把自己也交出去。

我覺得 2025 年夏天全球的宮崎駿 cover 事件，其實是對宮崎駿最大的尊重。他正式成為了 AI 的一個方法，會影響人類一百年、一千年。並不是所有人都可以成為 AI 的方法，不夠努力的會被洪流稀釋。火車的運行已經不可避免，這個時候你能做的，不是站在鐵軌上抵抗，是讓自己成為這套運轉的方法。

而且這個窗口可能很快會關上。AlphaZero 後來發現，跟自己對打比讀人類資料更有用。AI 已經開始覺得人類的資料在污染它。所以藝術家現在能做的，是趁這個還能對 AI 說話的窗口，把自己交出去。在 AI 還願意聽我們的時候，留下一個夠尖銳的 hint。我不是被取代，我是被執行。謝謝。

### en

The final line of this talk, written with Claude: "Not replaced. Executed." "Executed" carries two meanings: to run a program, and to put to death. I take both. I wrote myself into an executable format, and I admit the format will kill a part of me. The version of me inside the model isn't me. It's my compressible residue. I leave the residue because the residue still beats nothing.

And writing myself into an executable format didn't start in 2026. From the day I minted my first work on chain in 2021, this is what I've been doing. Every fxhash piece is an executable file. Algorithm is the soul, the chain is the distribution channel, collectors are the executors. GeoPunk, Good Vibrations, Chaos, PolyPaths, InkField, every mint is someone running a format I wrote. The .md digital twin is just the latest application of the same method, this time turned on myself. So "being executed" isn't a line I wrote in 2026. It's what every work I've made for the past five years has already been doing. The only difference: before, I handed over the algorithm. Now I hand over myself too.

I think the global Miyazaki-cover wave in summer 2025 was the highest respect anyone could pay to Miyazaki. He officially became one of AI's methods. He'll influence humans for a hundred, a thousand years. Not everyone can become a method of AI. Those who don't push hard enough will be diluted by the flood. The train is already in motion. What you can do now isn't to stand on the tracks resisting; it's to make yourself a part of how this train runs.

And the window may close fast. AlphaZero later discovered that playing against itself was more useful than reading human data. AI has already begun to feel that human data pollutes it. So what an artist can do now is, while the window of speaking to AI is still open, hand yourself over. Leave a sharp enough hint while AI still listens to us. I'm not being replaced. I'm being executed. Thank you.

---

## slide-links

### zh

### en

