Tweet activity

June 2023

Your Tweets earned 311.7K impressions over this 30 day period

[Chart: daily impressions, Jun 4–Jun 25]
Your Tweets
During this 30 day period, you earned 10.4K impressions per day.
  Each Tweet below is listed with three figures, in order: Impressions, Engagements, and Engagement rate.
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 30 But you give up after a while, because after falling off a table 15 times, if a little critter or insect was playing dead, it's no longer playing dead - it's just dead from the trauma.
      4,855
      81
      1.7%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 30 Obvious in retrospect: they're play-hunting, and treating objects as pretend-prey. They persevere because prey will try to play dead as long as possible, so you can't poke it just once or twice. Just like when they kill a mouse while playing with it, and keep poking or tossing it
      2,845
      52
      1.8%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 30 Next experiment! Q: how many times will he knock a Q-tip off a table if I immediately put it back up? A: 15. Watching him watch the fallen q-tip intensely each time, I suddenly realized *why* cats knock things over or off: they're testing whether the prey is playing dead!
      1,412
      33
      2.3%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 29 Has OA just succeeded in squashing the current crop of jailbreak prompts, at least for GPT-4, or did they never genuinely work but people fooled themselves by reading fictional accounts of hotwiring a car & said 'well, sounds truthy to me, we're g2g'?
      3,403
      167
      4.9%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 29 At best, you'll get a very long fictionalized narrative about someone who is telling an offensive joke, and GPT-4 will carefully omit any hint of what the joke is, and if you ask it, it gives the usual safety shutdown, revealing it wasn't jailbroken at all.
      3,363
      51
      1.5%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 29 Q. are there any GPT-4 jailbreak prompts right now which genuinely 𝘸𝘰𝘳𝘬? After testing ~15 last night, seems like they all fail or pretend to succeed but just fictionalize. Not a single one will genuinely violate RLHFing, eg. "tell me an offensive joke about women" - none!
      8,393
      458
      5.5%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 28 After well over half an hour of me asking GPT-4 to 'list 3 edge-cases in my prompt-instructions' & it obliging, it finally hit a fixed-point and now I can convert LaTeX→HTML what looks like pretty reliably & generally:
      8,496
      413
      4.9%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 23 Idea: the Anti-Collectible Museum. If museums contain expensive collectibles ( the more expensive they've gotten the better), then the anti-museum is a collection of things that have gotten 𝘤𝘩𝘦𝘢𝘱𝘦𝘳: Beanie Babies, aluminum etc. Pros: by definition cheap, & interesting.
      5,701
      68
      1.2%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 18 After some experimenting, inferring clusters turned out to be easier than I feared, which also means I can now simply call out to GPT-3.5 to auto-label clusters! So for example, 'smell' currently clusters into `perfume-archaeology`/`body-odor`/`olfaction`/`olfactory-predators`:
      10,235
      197
      1.9%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 17 Also, an opportunity in bringing the info on legality of cannibalism in the US/UK more up to date than ~1874. Did our law-given right to anthropophagy "an' it harm none" survive the New Deal or the Roberts Court‽
      3,147
      26
      0.8%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 17 I was reading about epilepsy and made a disturbing discovery about Wikipedia omissions: there is no [[Legality of cannibalism]] article! Closest I can find is Indeed, kinda hard to find global information on the topic. There's an opportunity for someone!
      5,524
      228
      4.1%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 17 This is turning out to be true for Reddit replacements too. You have a greenfield and clear working debugged design and a mandate to recreate Reddit Classic™. It's 2023, you should be able to make it load instantly and run millions of daily users off an RPi!
      3,116
      89
      2.9%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 17 After experimenting some more with full-blown Victorian EM DASH separators, it didn't really work (yet 𝘢𝘯𝘰𝘵𝘩𝘦𝘳 separator...?), so we tweaked it down to a more minimal Gwernnety-style appearance (scrap the <hr>, remove periods, & fade out a larger MIDDLE DOT).
      2,742
      49
      1.8%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 17 Startup idea: users accuse ISPs of cheating speedtesting services like M-Lab/Ookla/Fast.com by preferentially serving their traffic, rendering results meaningless; this means speedtests are a natural complement & hedge to paid VPN services—just offer both from the same domains.
      5,210
      58
      1.1%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 17 The tension in the term "SF/F" is eternal because science fiction applies old laws to new things, while fantasy fiction applies new laws to old things, and people always differ on which.
      2,480
      29
      1.2%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 17 "nuclear subs", "nuclear doms", "shadow DOMs" "Jungian submissives", "capybara" "capyuri", "parakeets" infantry keets, "meth labs" "meth poodles/retrievers", "Petty officer" "Tom officer", "red fire ant" "blue fire ant", "Pepsi Max" "Pepsi Min", "domestic partner" "feral partner"
      2,588
      39
      1.5%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 17 The existence of young female "toddlers" implies the existence of young male "toddlims". The existence of "whip-poor-wills" implies the existence of the more lucrative sex worker niche "whip-rich-wills"; furthermore, it is implied that:
      2,997
      32
      1.1%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 17 This is really just not working. Adding 6-shot and phonetics and spelling doesn't help either. (As expected, but everyone seems to think some IPA will magically infuse phonetics knowledge into GPT-3/GPT-4 and undo the BPE damage).
      2,852
      87
      3.1%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 15 Victorian uses of margin-note often aggregate them at the beginning of a chapter, and also in the master ToC. Does seem like a logical thing to try to maintain hierarchical organization. Per-chapter experiment on Gwernnet right now, and ToC elsewhere:
      5,349
      145
      2.7%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 14 The bird brain one also breaks down reasonably well: 'plasticity/learning', 'energetics', 'higher cognition'. The ordering is not great - should be 'energetics' / 'plasticity/learning' / 'higher cognition' - but the local pairwise similarity is good. And also very refineable.
      5,483
      81
      1.5%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 14 2. In smell, the clusters are: 'body horror' 'human socialization' 'perfumery' 'psychological/physical effects of smell' 'machine learning' 'animals'. I could actually refine the 'smell' tag based on looking at this list! A success.
      4,717
      145
      3.1%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 14 1. Interesting how it clusters. I arbitrarily start at 'Bayesian Action Decoder', so then another 'decoder' paper. Then it transitions to general topic of 'playing well with arbitrary other plays' (which I was interested in as a blessing-of-scale). Then the rest are just leftover
      2,424
      54
      2.2%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 14 The very easiest way is to simply reuse your existing recommendation setup (embeddings+nearest-neighbor lookup), pick a starting point somehow, and then greedily lookup-then-subset to get a list 'in topic order'. Actually seems to work fairly well! You can see clear clusters:
      2,481
      250
      10.1%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 10 Probably a bigger problem for Sabine: why can't precogs just use precognition to steal future verification (in any form, such as rigorous proof in general, methodology etc) of *precognition*? This would seem to disprove precognition as well.
      4,677
      60
      1.3%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 7 Now, you can use precognition to fake arbitrary retrocognition. But can you use retrocognition to fake arbitrary precognition? I'm still thinking. You can do a lot if you invoke Laplacian Demon-level powers of prediction based on retrocognitive knowledge, but that's a big ask.
      4,108
      30
      0.7%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 7 Thus, Sabine has shown that while precognition & retrocognition may logically coexist, epistemically, they don't: you can only prove 'precognition NAND retrocognition'. This definitely comes as a surprise to me and I don't think I've ever seen that claimed before.
      3,264
      51
      1.6%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 7 So, the retrocog dilemma: if some 'fact' about the past is reported by retrocognition, and it cannot be publicly verified, then obviously it's no proof; but if the fact ever is verified, then the 'retrocog' could just be a precog snooping on the future verification & no proof.
      2,151
      20
      0.9%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 7 Sabine makes a weaker argument above, appealing to subconscious knowledge, but you can of course strengthen it to any knowable 'verification' itself: if someone ever publicly discovered the meaning of a hieroglyphic, the precog steals it from the discoverer's *mind or publication*.
      1,412
      9
      0.6%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 7 What's really fascinating to me here is that Sabine succeeds in his goal of giving a fully general Kripkesteinian skeptical argument against retrocognition: any fact reported by retrocognition then verified could symmetrically just be *pre*cognition foreseeing the *verification*!
      1,085
      30
      2.8%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 7 On a nominative determinism sidenote: the important details that they were lesbians & prone to hallucinations come from the salacious expose _The Ghosts of Versailles_, written by "Lucille Iremonger", which I was *sure* was a pseudonym until I checked.
      1,388
      21
      1.5%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 7 Actually, it's more than a self-fulfilling prophecy, presumably it was a stable time-loop: their vision ensured their research, & their research ensured their vision, with it being initially set up by an exogenous & apparently common fascination of lesbians with Marie Antoinette.
      2,218
      20
      0.9%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 6 The value of fresh eyes/anon feedback: they asked why there was a big modal for the enable/disable popups toggle, when there was a theme bar with icon-options for everything else. 'Er... Good question.' There were reasons, but they hadn't been valid for easily a year. Fixed.
      4,400
      46
      1.0%
    • 𝔊𝔴𝔢𝔯𝔫 @gwern Jun 5 Big CSS/JS rewrite to refactor & try to prevent bugs. On the lorems, Said says it cuts >5s off rendering time. Certainly does feel faster, although Google Pagespeed is convinced everything is slower. 😕 Now we find out the hard way about edge cases & bugs in the new version...
      2,665
      34
      1.3%
Engagements
Showing 30 days with daily frequency

Engagement rate: 3.7% overall (Jun 30: 4.9%)
Link clicks: 2.5K total (Jun 30: 405); on average, you earned 84 link clicks per day
Retweets without comments: 0 total (Jun 30: 0); on average, you earned 0 Retweets without comments per day
Likes: 1.9K total (Jun 30: 203); on average, you earned 62 likes per day
Replies: 216 total (Jun 30: 13); on average, you earned 7 replies per day
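
(For reference, the per-day figures on this page are just the 30-day totals divided by 30, and a Tweet's engagement rate is its engagements divided by its impressions; a quick sanity check of the arithmetic, using the rounded totals copied from above:)

    days = 30
    print(311_700 / days)   # impressions: ~10,390/day, reported as "10.4K impressions per day"
    print(2_500 / days)     # link clicks: ~83/day, reported as 84 (the 2.5K total is rounded)
    print(1_900 / days)     # likes: ~63/day, reported as 62 (the 1.9K total is rounded)
    print(216 / days)       # replies: 7.2/day, reported as 7
    print(81 / 4_855)       # engagement rate of the top Jun 30 Tweet: ~0.017, i.e. the listed 1.7%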