Last night, I tried to expand the functionality of , but somewhere in the last three months, my LSP integration between it, 's gen_lsp_server, and broke completely. I also can't find any way of debugging it without rebuilding *something*. :(

I have a little time to work on *something* but I appear to have no energy to do anything.

I was thinking of , but what can be done quickly?

I have a night to blow since I'm in a hotel alone but I have no clue what to do: website maintenance, continue to obsess about , or work on the next writing project.

Finally got working after a few weeks of refactoring and reworking to a newer skill level.

... except now it is telling me I'm overusing "." and "a". Might have missed something.

Well, I got over the big hump on and got the clustered word detector working again. The biggest change is that I'm now defaulting to 15 surrounding words instead of the entire document (which got nasty with my 25k-word sample chapter).

Once I get the unit tests updated, I'm going to learn benchmarking so I can start measuring the O(n) problem I introduced with clustering versus the document-level ratio.
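Before reaching for a real benchmark harness, a rough timing sketch can show whether the windowed pass is anywhere near budget. Everything here is hypothetical (the function, the synthetic corpus, and the vocabulary size are stand-ins, not the actual detector):

```rust
use std::time::Instant;

// Stand-in for the clustered word detector: count duplicate word ids
// within `within` positions ahead of each token.
fn count_echoes(tokens: &[u32], within: usize) -> usize {
    let mut hits = 0;
    for i in 0..tokens.len() {
        let end = (i + within + 1).min(tokens.len());
        hits += tokens[i + 1..end].iter().filter(|&&t| t == tokens[i]).count();
    }
    hits
}

fn main() {
    // 25k synthetic "words" drawn from a small vocabulary.
    let tokens: Vec<u32> = (0..25_000u32).map(|i| (i * 7919) % 500).collect();

    let start = Instant::now();
    let hits = count_echoes(&tokens, 15);
    let elapsed = start.elapsed();

    println!("{} echoes in {:?}", hits, elapsed);
}
```

The useful property to measure is that per-token work is bounded by the window size, so the windowed pass is O(n·w) rather than the O(n²) of comparing every token against the whole document; a harness like criterion can confirm it once the numbers matter.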

All I want to do is have parse my 25k-word chapter in under 500 ms. Is that too much to ask?

... or I could stop writing 25k word chapters.

So, all I want is to have... :)

Ugh, last night my effort to fix a bug with turned it from O(1) into O(n), and I'm not happy. It always comes down to "compare the X closest words".

I also need to seriously figure out working with array subsets in Rust.

But I need to write 10-15k words before I can go back to it.

Since I'm going into a binge writing session (family headed out this afternoon), I figured I'd dog-food more.

I was frustrated that I couldn't use it for my Pride month stories, since the line numbers were off and I wasn't getting good error messages.

It ended up being a case of the crate being too helpful by trimming the string, which works for 90% of its use cases but not for line numbers. I also couldn't get the right line after the `---` fences.
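A hypothetical illustration of the trimming problem (not the actual crate's code): once the body after the front-matter fences is trimmed, line numbers computed against it no longer match the raw file.

```rust
// 1-based line number of the first occurrence of `needle` in `text`.
fn line_of(text: &str, needle: &str) -> usize {
    let offset = text.find(needle).expect("needle not found");
    text[..offset].matches('\n').count() + 1
}

fn main() {
    let raw = "---\ntitle: x\n---\n\nFirst paragraph.";

    // Against the raw file, "First" sits on line 5:
    assert_eq!(line_of(raw, "First"), 5);

    // A "helpful" trim of the body after the front-matter fences throws
    // that context away, so the same text now reports line 1:
    let body = raw.splitn(3, "---").nth(2).unwrap().trim_start();
    assert_eq!(line_of(body, "First"), 1);

    println!(
        "raw: line {}, trimmed body: line {}",
        line_of(raw, "First"),
        line_of(body, "First")
    );
}
```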


Plus I have a queue of spell-check bugs that need to be handled which I've been sadly ignoring for a few months (mainly because my C++ work isn't strong enough).

Which leads into , which really needs work, but I've been stalling on that. I really shouldn't, since it has tools to make my writing better. :D


Author Intrusion 

My proof of concept of worked! I got it to find 9.5k echo words in my 600k-word serial in 40 seconds. Since this appears to be a viable option, that means I have to switch from working randomly to project management and start filing issues in GitLab for what will be a 1.0.0.

Of course, it looks like I'm going to have to create a website for the NuGet packages, handle a lot of usability things, and hook into Atom via language services, but I see a possible path forward.


Another productive lunch hour on while working on echo word detection.

I got it to correctly detect the same word used within a given range. With these plugin settings:

- scope: content
  select: //token[@l>3]
  within: 3
  warning: 1
  error: 2

And the file:

one two one three four one

It produces a warning on the first and third "one" and an error on the second "one".
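As a sketch of the rule those settings describe (the function and names are hypothetical, not the plugin's actual code): for each token, count how many other occurrences of the same word fall within `within` positions, then map that count against the `warning`/`error` thresholds.

```rust
// For each token, report Some("warning") or Some("error") when enough
// echoes of the same word appear within `within` positions, else None.
fn echo_levels(
    tokens: &[&str],
    within: usize,
    warn: usize,
    err: usize,
) -> Vec<Option<&'static str>> {
    tokens
        .iter()
        .enumerate()
        .map(|(i, word)| {
            // Clamp the window to the ends of the token list.
            let start = i.saturating_sub(within);
            let end = (i + within + 1).min(tokens.len());

            // Count matching words in the window, excluding the token itself.
            let echoes = tokens[start..end]
                .iter()
                .enumerate()
                .filter(|&(j, w)| start + j != i && *w == *word)
                .count();

            if echoes >= err {
                Some("error")
            } else if echoes >= warn {
                Some("warning")
            } else {
                None
            }
        })
        .collect()
}

fn main() {
    let tokens = ["one", "two", "one", "three", "four", "one"];
    // The middle "one" sees two echoes (error); the outer ones see one each
    // (warning), matching the behavior described above.
    println!("{:?}", echo_levels(&tokens, 3, 1, 2));
}
```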

For XPath, I'll probably change it to `//token[length()>3]` because it's prettier.


Writing via TDD is occasionally slow, but it also feels like I'm trying to write a solid base for the rest of the code. Since I've failed so often to write , this approach "feels" better to me.

Now, I don't do BDD; I do a lot of TDD testing on SRP'd internals.

