Courage! Strength! Determination!
When was the last time you went to the beach?
Whenever I’m hunched over, convincing myself these keystrokes mean something, I imagine those who, beer in hand, saunter along the sunset. Volleyball rolling past.
Why am I here fretting about authentication when I could sleep under the Sun?
Well, that’s obvious.
To me, the unhappiest people in the world are those in the watering places, the international watering places like the south coast of France and Newport and Palm Springs and Palm Beach; going to parties every night, playing golf every afternoon, then bridge. Drinking too much, talking too much, thinking too little. Retired. No purpose.
― Richard Nixon
Is it true? Well, it seems to be the case thus far…
The above preamble was entirely irrelevant for this chapter… or is it? It’s what I think about as I work on this: maybe there will be a purpose though at times I feel as though I am bending cages to a more comfortable fit.
Anyway, let’s get the site entries working. But before we do that, I thought I’d digress into storage strategy.
While revisiting the file :save controller action I began to think about how inefficient it’d be to serve from B2 all the time, especially as one works on the site. The plan was always to get a CDN thrown in the mix, but a local cache would prove fruitful, especially for pages that require authorization. Luckily, for a local cache we can use in-memory structures — which, I think, will be required for the editor experience anyway.
Especially if we want multiplayer editing.
You may be wondering, “why are we avoiding using the filesystem on the server?”
Great question. It’s about making it easy to migrate the server later. When we deploy this, it’ll probably be on a small box with only ~40 GB of storage, a good portion of it already spoken for. We can either deal with the caching issues upfront or later; may as well avoid the whole headache of managing server storage.
So while we work on finishing out these site entries, we may have to rethink how files are handled. It’s been a while since I’ve dealt with how the websocket libraries work within Phoenix, but we’ll figure it out. There are a few vague ideas. The way it works will be clearer if we aim for multiplayer editing first. So, just as a mental note, we need to make that work before we settle on our file caching strategy.
Back to the site entries.
Site entries wrap up
Part of the reason I delayed finishing this to another chapter is that the editing story of a site entry requires, in my humble opinion, a nice editor. Especially if it’s markdown, though maybe that’s dumb. There’s something uncanny about having full access to all the controls you’d get from Microsoft Word. One could get this done with a simple textarea and move on, but maybe rich text editing is cool. It’s a nice experience on GitHub or Discourse, for example. So I was vaguely planning to do that this chapter.
But maybe getting everything wired up with a textarea first would be smart.
Anyway, back to doing things.
Well, let’s test the function we made in the last chapter. We can boot up the editor for test.app.localhost and save something on index.html. If all goes well, we should see something in the database.
Time passing for a while
Let’s see… what to do, what to do.
Well, before walking away from the last chapter I ended up creating some of those SiteEntry items. If we query the database we can check it out:
psql -U postgres -d project_dev
project_dev=# select * from site_entries order by inserted_at desc limit 1;
-[ RECORD 1 ]------------------------------
id | 4
title | null
emoji_code | null
post | null
site_id | 1
Notice how we left the title, emoji, and actual post all NULL on creation? That was intentional.
You can have a generic update… or you can fill in the information to make it more meaningful, upgrade it to a milestone.
The question is, what should the generic placeholders be?
The generic title could be “<site_name> updated”. The post doesn’t have to contain anything, other than a default “No comment.”
Then as for the emoji, I’m not sure. Actually, instead of an emoji it could be a generic icon from the set we’ve been using, like a document. This helps distinguish entries that were filled in from those that weren’t.
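A sketch of that fallback, assuming a hypothetical frontend helper (the field names just mirror the site_entries columns from the query above; none of this is wired-up code):

```typescript
// Hypothetical shape mirroring the nullable site_entries columns.
interface SiteEntry {
  title: string | null
  post: string | null
}

// Fall back to generic placeholders when an entry was never filled in,
// so a bare entry still renders something meaningful.
function displayEntry(
  entry: SiteEntry,
  siteName: string
): { title: string; post: string } {
  return {
    title: entry.title ?? `${siteName} updated`,
    post: entry.post ?? "No comment.",
  }
}
```

Then the render layer never has to special-case the NULL columns.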
Lastly, we need to make editing something you do inline. Which shouldn’t be too bad, I don’t think.
Hour later
I originally planned to use the ProseMirror wrapper “tiptap” in order to create this editing experience, but I found something more barebones to use instead.
More research
This is the first time I felt resistance in implementing something on this toy idea. I think part of it is doubt.
I started this project knowing no one necessarily needs a new website host, especially a social one. It’s a double-cross of values, in a way, because one would probably be better off not interacting with socials in any capacity. But at the same time, there’s no sanctuary for the luddite. Other than silence.
Anyway, let’s begin just getting this done. Funnily enough, I got “stuck” when it came to running a simple package install…
npm i @handlewithcare/react-prosemirror
Great, let’s hop on over to the SiteProfile.
Hour and a half of reading documentation
Sometimes I feel a little in over my head — or, more aptly, I wonder why I’m signing up to use both ProseMirror as well as CodeMirror. But alas, I wanted to make site entries more interesting.
Is it really interesting though?
Well, part of the idea here is that I want the “feed” to capture the common motif of “Updates” that I see on some websites. Not just capture, but record it. Make a record. Your site profile should actually reflect the history of your site.
One of the biggest regrets I have about my websites is that I deleted all of the version control. Amongst those that still had version control, I wasn’t all that studious about “committing” each post.
How cool would it be to jump back in time, on the platform, with your website? This is why I feel like having a little “blurb” next to it, with a title and an emoji for flair, is important. Because it’s about making a story. Reaching milestones. Making an announcement to look back on.
It makes for a more interesting story… maybe. Even if it’s all a delusion.
So we have to do this. Even if it seems like such a small irrelevant feature.
So I added the ProseMirror entry and then spent thirty minutes or more figuring out how best to have edit/non-edit mode, only to discover that just making the editor readonly is currently the best path forward.
Obviously there are some questions to ask when it comes to performance… if there are 100 instances of the editor through comments or whatever… but let’s put that off.
Hour of fiddling
Day break, weekend cadence
I think a secret to building things is extreme boredom. I am so bored with everything that I can only find writing or building things mildly satisfying. Reading too; I’ll update the book log later.
I’ve not been necessarily bored in the past couple of days, so I haven’t found motivation to get over this last hurdle before the MVP is finished.
The answer is to obviously use the “tiptap” editor that comes batteries included and has pretty okay documentation. And the only reason why I haven’t done that, yet, is because I have another project idea (which I will write about on here, eventually) which would probably require some familiarity with how ProseMirror works internally.
So I could learn how it works in this usecase and leverage that with the next project, but I don’t find how I’m using it in today’s project necessarily interesting. Why do all of this reading and research on the inner model of this package when the core value of this project is a good code editor?
So maybe I just bend the knee and use the convenient wrapper so I don’t have to think about this.
Or, we wait until boredom overrides the inanity of implementing this editor. That’s the crossroads.
There’s only like two extensions I want to add. The first is @mentions and the second is maybe emojis. Both of those come “out-of-the-box” with the “tiptap” wrapper.
Day break
I got fixated on the name of the website. I was pretty happy with what I originally came up with, but then I started to wonder if there’s something better. I’m not sure.
It’s also a little confrontational, in a way, because it reveals your patterns. What do you feel like you could “back” without being “fake”? Hard to say, and then you think well, it’s all a business transaction anyway, or I can fake everything anyway, but sometimes you could hope it’s more than that, or that you don’t have to. Have to fake it, but maybe it’s better than being real.
If I was out to look for money there are hundreds of AI slop ideas waiting to be implemented. I started this out of a vague hope it wouldn’t feel like work-work, instead of just work. There’s a difference between the two.
20 minutes
So I reorganized more of the frontend code. That’s always a good place to start I suppose.
Few days later
Maybe it’d be smart to start marking the days worked on this as motivation.
It has been a few days since last working on this.
Sometimes doubts arise about the utility or use, but I think it’s also about whether one could “believe” in it. The proselytizing tendency is difficult to escape… in the blood.
Anyway, for a few days now I’ve been working on other stuff to avoid dealing with prosemirror.
Frankly, the whole reason this started is because it’s over, in a way. This is a “love letter” to “this section” of life to which one says goodbye, maybe. The same as closing an imageboard.
Walk
And after going on a walk today, I have to admit that there isn’t necessarily a “grand” reason behind making this.
Here’s an article I read today, about fulfillment.
It hits a lot of great points: the reason I started the whole “small business” route is exactly because of autonomy.
I don’t really think about “mastery” all that much. Are there functions of a small business someone else would be better at? Absolutely, but if you also push yourself you can become pretty good too.
Few hours of musing
I was thinking about the likes/comments and tried to imagine what this would look like without them. After all, the main reason anyone wants to use a website is its addictive qualities/novelty/engagement. What would it mean if you took that away, so that only a feed remains?
I suppose it turns more into an internal RSS.
It might also make it more immersive? Because you can implement your own comments system, or a “claps” system like Medium. I think that’d be fun.
But hey, if it sucks we can always add Official™ likes (reactions, probably) and comments.
But yeah. I think that’s why I’m building this. To try the alternative out.
Hours of fiddling
I realized I actually enjoy messing with CSS, strangely enough. Usually it’s a bother, but it’s a bother because there’s a deadline. And maybe you feel dumb when you could use Figma instead, maybe. I don’t know. It’s nice though.
Also, I realized that the best way to store the contents of the post is as JSON. That’s how we can add notifications for mentions and other things. It also allows us to render it without instantiating ProseMirror.
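As a sketch of why JSON helps here: the stored document is a tree of nodes, so finding @mentions for notifications is just a tree walk. The “mention” node type and attrs.id below are assumptions based on how tiptap’s mention extension typically serializes; adjust to whatever actually gets persisted:

```typescript
// A stored post is a tree of nodes: { type, content?, attrs?, text? }.
// This mirrors the ProseMirror/tiptap JSON document shape.
interface DocNode {
  type: string
  content?: DocNode[]
  attrs?: Record<string, unknown>
  text?: string
}

// Walk the tree and collect the ids of "mention" nodes so the backend
// could fan out notifications. Node/attr names are assumptions.
function collectMentions(node: DocNode): string[] {
  const ids: string[] = []
  if (node.type === "mention" && typeof node.attrs?.id === "string") {
    ids.push(node.attrs.id)
  }
  for (const child of node.content ?? []) {
    ids.push(...collectMentions(child))
  }
  return ids
}
```

The same walk works server-side on the jsonb column, without ever booting an editor.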
Few minutes of browsing around
So I decided to bend the knee. Reading prosemirror code feels so verbose. I don’t care anymore. I guess that’s frontend life: just bend the knee.
Look, if the central proposition of this app was something like Substack, where I had to master the editor experience, then yeah. Of course. But this isn’t it.
Let’s install tiptap.
npm install @tiptap/react@beta @tiptap/pm@beta @tiptap/starter-kit@beta
Read more docs
Okay, finally we’re cooking. It’s all wired up and markdown mode works out of the box.
One question is whether to remove the headers or not. Well, we can just remove them real quick.
It’s been so long since a code snippet was posted. Not sure where to begin. That’s okay. Took a good chunk of days to finally make the profile and entry editor feel right, so what’s left?
Well, toggling between edit and save, updating the database, and then statically rendering on load.
Let’s start first with the edit and save.
To do that, we need to hold the editor’s contents as JSON while it’s being edited.
import { useState } from "react"
import { useEditor } from "@tiptap/react"
import StarterKit from "@tiptap/starter-kit"

export function EntryEditor() {
  // Hold the document as ProseMirror JSON while it's being edited.
  const [content, setContent] = useState(initialDoc)

  const editor = useEditor({
    immediatelyRender: false, // don't render during SSR
    extensions: [StarterKit.configure({ heading: false })],
    content,
    onUpdate: ({ editor }) => {
      // Mirror every change into state as JSON so we can save it later.
      setContent(editor.getJSON())
    },
  })

  // ...
}
Hour of moving files around
I don’t think there’s much point in explaining how a library works. Let’s just skip to the backend controller.
First, we’re going to store this stuff as JSON. This makes it so we can do server-side rendering.
So let’s update the site_entries:
defmodule Project.Repo.Migrations.CreateSiteEntries do
  use Ecto.Migration

  def change do
    create table(:site_entries) do
      # ...
      add :content, :jsonb, null: false, default: "{}"
      # ...
    end
  end
end
I then drop the current database and rerun the migrations with this single command:
mix ecto.reset
After creating another test site we can now wire up the rest of it.
Quick side note
So far it’s somewhat cool to use a JS framework as the frontend. But this server-client mismatch has been a source of a lot of headaches…
Before actually saving and loading the JSON from the backend, I ended up making sure we could render it and edit locally, at least.
The thing about rendering user-generated HTML is that if you aren’t careful, you can compromise your site.
The go-to sanitization package is DOMPurify, which needs a window object to work; on the server that means pulling in JSDOM, a package meant purely for Node.
So you have to do something like this in order to make this work:
// assets/js/lib/sanitize.ts
import createDOMPurify, { type DOMPurify } from "dompurify"

let purify: DOMPurify

if (typeof window !== "undefined") {
  // In the browser, use the real window.
  purify = createDOMPurify(window)
} else {
  // On the server, build a window with JSDOM (resolved at runtime).
  const { JSDOM } = require("jsdom")
  const { window } = new JSDOM("")
  purify = createDOMPurify(window)
}

export function sanitizeHTML(html: string): string {
  return purify.sanitize(html)
}
Then we need to mark it as external to load it in dynamically:
config :esbuild,
  version: "0.25.5",
  ssr: [
    args:
      ~w(js/ssr.jsx --bundle --platform=node --external:jsdom --outdir=../priv --format=cjs),
    cd: Path.expand("../assets", __DIR__),
    env: %{"NODE_PATH" => Path.expand("../deps", __DIR__)}
  ]
Finally, we need to separately install JSDOM into the priv directory (which ships with the deployed application).
cd priv/
npm i jsdom
Now whenever we deploy, we’ll have to install JSDOM into this directory. Thus, when the NodeJS worker runs ssr.js it can dynamically resolve jsdom within the sanitize.ts file.
This took more time than desired, but now the benefit is that when someone writes something, it can be rendered to HTML.
// assets/js/components/Timeline.tsx
// ...
// Render the stored JSON straight to an HTML string (no editor instance
// needed), then sanitize it before injecting.
const html = renderToHTMLString({
  content: initialDoc,
  extensions: [StarterKit.configure({ heading: false })],
})
const pureHTML = sanitizeHTML(html)
// ...
<div className="prose">
  {isEditing ? (
    <>
      <SimpleBubbleMenu editor={editor} />
      <EditorContent editor={editor} />
    </>
  ) : (
    <div dangerouslySetInnerHTML={{ __html: pureHTML }} />
  )}
</div>
// ...
We can explore the EditorContent and bubble menu and so on later.
But now that’s wired up, let’s get back to wiring up the endpoint for saving.
Because that’s what life is about. You keep pressing forward. You keep pursuing and you either give up to live as a shell or push forward until there’s nothing left.
More hiatus
Anyway, now we can update and delete site entries.
# lib/project_web/controllers/entry_controller.ex
defmodule ProjectWeb.EntryController do
  use ProjectWeb, :controller

  alias Project.Hosting

  def update(conn, %{"entry_id" => entry_id} = params) do
    scope = conn.assigns.current_scope
    {:ok, _entry} = Hosting.update_site_entry(scope, entry_id, params)

    conn
    |> put_flash(:info, "Entry updated!")
    |> redirect(to: ~p"/sites/#{scope.site.subdomain}")
  end

  def delete(conn, %{"entry_id" => entry_id} = _params) do
    scope = conn.assigns.current_scope
    {:ok, _entry} = Hosting.delete_site_entry(scope, entry_id)

    conn
    |> put_flash(:info, "Entry deleted.")
    |> redirect(to: ~p"/sites/#{scope.site.subdomain}")
  end
end
The above only required two new functions under hosting.ex.
# lib/project/hosting.ex
def update_site_entry(%Scope{} = scope, entry_id, attrs) do
  # Match on the struct so a missing entry falls through the `with`
  # instead of raising on nil.
  with %SiteEntry{} = entry <- Repo.get_by(SiteEntry, id: entry_id),
       true <- entry.site_id == scope.site.id,
       changeset <- SiteEntry.changeset(entry, attrs, scope),
       {:ok, entry} <- Repo.update(changeset) do
    {:ok, entry}
  end
end

def delete_site_entry(%Scope{} = scope, entry_id) do
  with %SiteEntry{} = entry <- Repo.get_by(SiteEntry, id: entry_id),
       true <- entry.site_id == scope.site.id,
       {:ok, entry} <- Repo.delete(entry) do
    {:ok, entry}
  end
end
Still not too sure on the authorization story yet, but it’s alright. We can just continue to check the scope at this level.
Great!
That was a long blocker. What’s next?
Well, what I’m going to try to do is recreate siqu.neocities.org through this site editor. After that’s done we can look at how multiplayer will work, multiple users, I don’t know, we can make an economy and status games.
Anyway, let’s begin recreating siqu.neocities.org.
I mean we’re still missing the whole “multiple users” sort of thing, a main dashboard portal, that sort of thing, but whatever. I would show some screenshots of the entry stuff working but I would prefer to show it with site snapshots. Right now it’s just colorful divs.
So I think the next cool thing to aim for is polishing the editing experience.
It should be no surprise that siqu.neocities.org is updated through the CLI. I use the “astro.build” stuff, even though I’m not sure I’d recommend it.
But yeah, the objective here is to be able to recreate siqu.neocities.org without requiring any build step on a local computer.
Pretty big goal, but there are a few tricks we can do.
Once that goal is achieved, then we can look at actually making the website have multiple users and clean up the authorization stuff.
Recreating siqu.neocities.org: Pars Prima
So why would one use this website builder over other website builders?
Disclaimer: there are pros and cons to every option, choose whichever works for you!
Here is what I’m hoping to provide on this one:
- Multiplayer experience
- More robust history (snapshots) and posting (mentions, emojis, titles, notifications (because I need to deliver dopamine for this to be even remotely “successful”))
- (???) wildcard
What do you think the wildcard will be? What possibly could it be? What in the world is this wildcard?
Stay tuned.
For now, let’s just get the basics of editing down.
We’ll wrap up this chapter, hopefully, after the scaffold of siqu.neocities.org is created.
Hopping on over to the editor, let’s get JavaScript working.
But before that: loading test.app.localhost:4000 returns a “Not Found”, even though there’s an index.html.
That’s because when we try to load a file, we’re literally looking only for “/”. We need to “resolve” that to index.html.
# lib/project/hosting.ex
def get_site_file_by_path(%Scope{} = scope, path) do
  # Resolve extensionless paths (e.g. "/" or "/about") to their index.html.
  path =
    case Path.extname(path) do
      "" -> Path.join(path, "index.html")
      _ -> path
    end

  Repo.get_by(SiteFile, site_id: scope.site.id, path: path)
end
Path.extname checks whether the path we’re loading ends with an extension, e.g. test.js. If it doesn’t, such as when visiting /about, we append index.html to make /about/index.html. We’ll need to make this resolution more robust as time goes on, but this is fine for now.
One thing to keep in mind is that we probably want to make sure every file path ends in an extension. Something to note.
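For reference, the same resolution rule sketched in TypeScript (a mental model, not code the server runs):

```typescript
import * as path from "node:path"

// Mirror of get_site_file_by_path's resolution step: a path without an
// extension resolves to the index.html underneath it.
function resolveSitePath(requested: string): string {
  return path.extname(requested) === ""
    ? path.join(requested, "index.html")
    : requested
}
```

So "/" becomes "/index.html", "/about" becomes "/about/index.html", and "/test.js" passes through untouched.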
Cool. Let’s beef up the index.html and reference another file.
<html>
  <head>
    <script src="test.js"></script>
    <style>
      body {
        background: #000;
      }
      h1 {
        color: red;
      }
    </style>
  </head>
  <body>
    <h1>Welcome to my new website!</h1>
  </body>
</html>
Refreshing the test.app.localhost:4000/ and looking at the console we get this:
The resource from “http://test.app.localhost:4000/test.js” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff).
That’s because we’re saving all the MIME types as text/html.
Note: A MIME type (Multipurpose Internet Mail Extensions type) is a standard label that tells browsers the format of a file, so they can handle it correctly.
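A minimal sketch of what a MIME lookup does (illustrative only; the real fix below leans on a proper library, which covers far more types and edge cases):

```typescript
import * as path from "node:path"

// Tiny illustrative extension → MIME table; real libraries cover far more.
const MIME_TYPES: Record<string, string> = {
  ".html": "text/html",
  ".js": "text/javascript",
  ".css": "text/css",
  ".png": "image/png",
}

// Fall back to a generic binary type when the extension is unknown.
function mimeFromPath(filePath: string): string {
  return MIME_TYPES[path.extname(filePath)] ?? "application/octet-stream"
}
```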
def serve(conn, %{"path" => path} = _params) do
  subdomain = conn.assigns.subdomain

  with # ...
  do
    conn
    |> put_status(200)
    |> put_resp_content_type("text/html")
    |> send_resp(200, content)
  else
    _ -> # ...
  end
end
The offending line is put_resp_content_type/2.
To fix this, we need to:
- Make sure we’re saving the MIME type correctly for each file
- Use that saved MIME type when serving the file
We can knock out the second bullet real quick:
def serve(conn, %{"path" => path} = _params) do
  subdomain = conn.assigns.subdomain
  scope = Scope.for_user(nil)

  with # ...
       file when file != nil <- Hosting.get_site_file_by_path(scope, path)
       # ...
  do
    conn
    |> put_status(200)
    |> put_resp_content_type(file.mime_type)
    |> send_resp(200, content)
  else
    _ -> # ...
  end
end
Of course that doesn’t solve the problem, because we’ve also been saving all MIME types in the database as text/html:
project_dev=# select * from site_files;
-[ RECORD 1 ]------------------------------
id | 3
path | test.html
mime_type | text/html
size | 0
site_id | 1
-[ RECORD 2 ]------------------------------
id | 5
path | test.js
mime_type | text/html
size | 0
site_id | 1
-[ RECORD 3 ]------------------------------
id | 4
path | index.html
mime_type | text/html
size | 0
site_id | 1
To fix that, let’s check out our save function:
# lib/project_web/controllers/file_controller.ex
def save(conn, %{"content" => content, "path" => path} = _params) do
  scope = conn.assigns.current_scope
  path = Enum.join(path, "/")

  case Hosting.get_site_file_by_path(scope, path) do
    nil -> Hosting.create_site_file(scope, path)
    site_file -> Hosting.update_site_file(scope, site_file)
  end

  :ok = Storage.save(scope, path, content)

  conn
  |> json(%{status: "ok", content: content, message: "Successfully saved."})
end
Looking at create_site_file/2 or update_site_file/2 you can see the MIME type inlined:
# lib/project/hosting.ex
def create_site_file(%Scope{} = scope, path) do
  with :ok <- Storage.save(scope, path, ""),
       {:ok, file} <-
         %SiteFile{path: path, mime_type: "text/html"}
         |> SiteFile.changeset(%{}, scope)
         |> Repo.insert_or_update() do
    {:ok, file}
  else
    _ -> :error
  end
end
To fix this, I think we can just derive/update the MIME type for every changeset.
Let’s hop on over to site_file.ex and add a check for the MIME type.
# lib/project/hosting/site_file.ex
def changeset(file, attrs, scope) do
  file
  |> cast(attrs, [:path, :size])
  |> put_mime_type()
  |> put_change(:site_id, scope.site.id)
  |> validate_required([:path, :mime_type])
end

defp put_mime_type(changeset) do
  # Derive the MIME type from the (possibly updated) path.
  case get_change(changeset, :path) || get_field(changeset, :path) do
    nil ->
      changeset

    path ->
      mime_type = MIME.from_path(path) || "application/octet-stream"
      put_change(changeset, :mime_type, mime_type)
  end
end
Cool. Now if we ever want to restrict mime_type we can add those restrictions to this put_mime_type/1 function.
With that, I re-saved the test.js file:
// test.js
console.log("Howdy")
And upon refreshing http://test.app.localhost:4000/ the console reads:
Howdy
Okay, let’s see. While we could continue on this path, it’d be nice to develop the rest of the editor with multiplayer in mind after all. Probably easier to set up multiplayer editing now rather than tacking it on later. Let’s pivot.
Pars Prima Suspensus: Collab Arc
To achieve this, we’ll be using the YJS stuff. It looks like we’ll also be using some Phoenix built-ins. Neat!
Anyway, let’s use the Elixir implementation for YJS, y_ex.
Note: YJS is the name of a library which does fancy conflict resolution to support real-time collaborative apps
After adding y_ex to the deps in mix.exs, fetch it:
mix deps.get
Reading around
Alright, so this might be one of the more painful ideas to implement. Only a few weeks ago I was thinking implementing ProseMirror ought to be a cakewalk, and here we are three procrastinated weeks later.
More reading
So what we need to do is just keep putting stuff together until it makes sense. Always works.
The first thing we want is to set up a phoenix channel.
This lets us use WebSockets for bidirectional communication.
More reading
So a couple of things I was confused on were authentication and how Channels differ from LiveViews (a separate idea within the Phoenix collective). You can just think of Channels as way lower level. We’re not going to cover LiveView in this.
Anyway, with websockets we can’t send any cookies over(?). Cookies are what let the server know you’re you. So you need to create a separate token to verify against if you want authorization. This is covered by Phoenix.Token.
Anyway, let’s create a socket and a channel. We’ll use this channel to then send YJS diffs and update in real time.
mix phx.gen.socket User
More reading
Actually, it looks like to establish a websocket connection an HTTP request is upgraded to a websocket. During that handshake, we can check the session. Neat!
Anyway, let’s just get the ball rolling.
For starters, let’s confirm the session auth works in the connect/3 callback.
Hour of messing around
Okay, so we generated that socket. In the socket we’ll authenticate the user. An authenticated user can then use all of the (future) channels.
# lib/project_web/channels/user_socket.ex
defmodule ProjectWeb.UserSocket do
  use Phoenix.Socket

  alias Project.Accounts

  @impl true
  def connect(_params, socket, connect_info) do
    # The session rides along on the HTTP upgrade request, so we can
    # authenticate it much like a plug would.
    with %{"user_token" => user_token} <- connect_info.session,
         {user, _token_inserted_at} <- Accounts.get_user_by_session_token(user_token) do
      {:ok, assign(socket, :current_user, user)}
    else
      _ -> {:error, :unauthorized}
    end
  end

  # One id per user lets us address every socket that user has open.
  @impl true
  def id(socket), do: "user_socket:#{socket.assigns.current_user.id}"
end
It took a while to arrive at this point. There are a few lines in the docs you have to watch out for.
On the frontend we connect with this:
import { Socket } from "phoenix"

// csrfToken is read from the page elsewhere (e.g. the csrf meta tag).
let socket = new Socket("/socket", { params: { _csrf_token: csrfToken } })
socket.connect()
The csrfToken is required to prevent cross-site request forgery over websockets. If it’s a valid token, the machinery behind the Phoenix.Socket implementation gives us the current session, from which we can then get the current user.
The id is what lets us identify all the sockets for a given user, e.g. if you have multiple tabs open.
Cool. Now that we’ve authenticated the user, we can create “channels” to hang out in. Let’s just generate the generic example to see how this works.
mix phx.gen.channel Room
Then we need to expose the channel to our socket:
defmodule ProjectWeb.UserSocket do
  use Phoenix.Socket

  alias Project.Accounts

  channel "room:*", ProjectWeb.RoomChannel

  # ...
end
The generated channel has a bunch of callbacks we can hook into to handle messages and broadcast to other users.
I cobbled together some horrible React code to get a proof of concept working. As we march towards collaborative editing it’ll be cleaned up and make more sense. Anyway, the channels are working on the front end, but it isn’t anything cool.
Right now if you open two tabs you can see messages pushed to both.
Now that we figured that out, we need to actually hook this up to YJS.
First, let’s migrate the bad code to another store so we don’t have to think about the React lifecycle as much.
30 minutes of moving stuff around
Okay, so the store is working.
I think the next step is to just wire the rest of it up on the backend and go from there.
So looking at the example YJS code, we need to spawn a process for each document (this allows us to hold it in memory for others to collab on).
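The lookup-or-spawn shape can be sketched as a toy registry (on the Elixir side this would be a process per document; this TypeScript map is just the mental model, not the real implementation):

```typescript
// Toy registry modeling "one live document per file path".
// Everyone who opens the same path gets the same in-memory document,
// which is what makes collaboration on it possible.
class DocRegistry {
  private docs = new Map<string, string[]>() // path → in-memory edit log

  // Get the shared doc for a path, creating it on first access.
  open(path: string): string[] {
    let doc = this.docs.get(path)
    if (!doc) {
      doc = []
      this.docs.set(path, doc)
    }
    return doc
  }
}
```

Two clients calling open("index.html") get the same document instance; open("test.js") gets its own.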
One thing I’m concerned about is how our current frontend caching will work with YJS. But we’ll get there.
So since we’re using our own websocket implementation (the Phoenix one), we’re going to have to manually implement what y-websocket does. Which shouldn’t be too bad, I don’t think. I mean, for now we can just yoink this implementation.
For starters, let’s start spawning the documents.
Hours later
So I basically lifted the implementation from the repository linked above. We make a shared document on connect, and then broadcast.
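As a toy model of that broadcast flow (not real YJS; YJS’s CRDTs merge concurrent edits far more cleverly than this last-writer-wins sketch), every replica applies every update it hears, so they converge:

```typescript
// Toy stand-in for a shared document: last-writer-wins cells keyed by a
// logical clock. Real YJS uses CRDTs; this only shows the
// broadcast-and-apply shape of the protocol.
type Update = { key: string; value: string; clock: number }

class ToyDoc {
  private cells = new Map<string, { value: string; clock: number }>()

  // Apply an update only if it is newer than what we already have,
  // so late or duplicate deliveries are harmless.
  apply(update: Update): void {
    const current = this.cells.get(update.key)
    if (!current || update.clock > current.clock) {
      this.cells.set(update.key, { value: update.value, clock: update.clock })
    }
  }

  get(key: string): string | undefined {
    return this.cells.get(key)?.value
  }
}

// "Broadcast": deliver an update to every connected replica.
function broadcast(replicas: ToyDoc[], update: Update): void {
  for (const doc of replicas) doc.apply(update)
}
```

Even if replicas receive the same updates in different orders, they end up agreeing, which is the property the channel plumbing has to preserve.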
The cursors are working between tabs, which is cool!

There are some issues when it comes to actually broadcasting the updates, and with tying them to the actual documents, like index.html or test.js. For some reason the document isn’t syncing correctly.
So I need to figure out what’s going on there first, but then we can get multiplayer editing at least working.
After that we need to figure out the caching story… and how it works with our current editor store. We’ll see.
All in all I’m thankful for this switch to wiring up the multiplayer experience first. It would’ve been pretty bad otherwise.
This chapter was a little barebones and a long time in the making. Goes to show it’s best to take the path of least resistance before chasing perfection, especially on the front end.
There are a lot of vague big goals littered throughout this chapter, which we will hopefully tackle as time progresses.
But we’ll at least cover making the multiplayer experience work well in the next chapter. In that chapter I’ll also show how it all works on the backend. Also the editor needs a lot of polish.
Until next time.