• 1 Post
  • 9 Comments
Joined 9 months ago
Cake day: October 2nd, 2023

  • Haven’t tried the other two, but I would say yes if you’re into roguelikes. The physics and reactions are only half of it; the wand-building mechanics let you build some completely bizarre and powerful wands, and with a little luck you can get a god run going fairly quickly… but you’re always vulnerable.

    Highly recommend going in blind; there are a lot of secrets to find, different sidequests, etc. Winning the game once is a milestone.




  • +1 for Proxmox. It has been a fun experience, as there are plenty of resources and helper scripts to get you off the ground. Jellyfin was the first thing I migrated from my PC; hardware encoding may give you a bit of a tussle, but nothing unsolvable. Also note that Proxmox is Debian under the hood, so you may find it easy to work with. I looked into Unraid, and it seems great if all you’re doing for the most part is storage, but if you want Linux containers and virtual machines, Proxmox is your best bet.

    I got a small 4-bay 2U server from a friend on the cheap; $1,000 should get you relatively nice new or slightly older used hardware. Even just a PC with a decent number of drive bays will get you started. And drives are cheap; a RAID 1 setup was one of the first things I did.

    In the end I’ll likely get a separate NAS rack server just to segregate functions, but as of now I simply have a Proxmox LXC mounted to my NAS drives, running Samba to expose them.

    Tailscale is a nice set-and-forget solution for VPN access; I ended up going the route of getting an SSL-certified domain and beefing up my firewall a bit. From the bit I’ve messed with it, it certainly has a steeper learning curve than OpenVPN, but it is much more hardened and versatile.

    As for Pi-hole, I’ve found AdGuard Home to be a suitable replacement, and it can be installed alongside OpenWrt, though I have a somewhat unconventional router with 512MB of RAM, so YMMV.
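    A rough sketch of that LXC + Samba setup, for anyone curious (the container ID, host path, and share name here are made up for illustration):

```
# On the Proxmox host: bind-mount a host directory into LXC 101
pct set 101 -mp0 /mnt/tank,mp=/srv/tank

# Inside the container, a minimal share in /etc/samba/smb.conf:
[tank]
    path = /srv/tank
    read only = no
    browseable = yes
```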



  • bbuez@lemmy.world to memes@lemmy.world · A bit late · edited 2 months ago
    Maybe it’s a divide for you; my SO says she’d pick the bear if it weren’t me. And I don’t blame her.

    Instead of arguing the merits of this debate, maybe it’s worth analyzing your own merits. Men (individually, but among their peers) should be ashamed that women typically seem to pick a bear over them, instead of falling further into the rut that pushes everyone - not just women - away from their social circles and friends.

    Someone tells you they’d rather be getting mauled by a bear? Take the hint. The divide exists within your head; make friends, be kind, and you’ll find happiness.

    Edited for individuals to contextualize on their peers instead of generically

    Edit edit, I mean go ahead, be reactionary


  • We do not have a rigorous model of the brain, yet we have designed LLMs. Experts with decades of experience in ML recognize that there is no intelligence happening here, because yes, we don’t understand intelligence, certainly not enough to build one.

    If we want to take from definitions, here is Merriam-Webster:

    (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason

    (2) : the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests)

    The context stack is the closest thing we have to retaining and applying old info to newer context; the rest is in the name: Generative Pre-Trained language models. Their output is baked by a statistical model finding similar text. They have also been coined “stochastic parrots” by some ML researchers, which I find a more fitting name. There’s also no doubt of their potential (and already practiced) utility, but they are a long shot from being able to be considered a person by law.
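    The “parrot” framing is easy to see in miniature. Below is a toy bigram model (the corpus and names are invented for illustration) that can only ever replay continuation statistics from its training text, the same “finding similar text” idea at a vastly smaller scale:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def most_likely_next(follows, word):
    """Replay the most frequent continuation seen in training."""
    if word not in follows:
        return None  # the "parrot" has nothing to imitate
    return follows[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(most_likely_next(model, "the"))  # "cat" (seen twice vs. "mat" once)
```

    Scale the counts up to billions of parameters over internet-sized text and you get fluency, but the mechanism is still statistical continuation, not understanding.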


  • I don’t want to spam this link, but seriously, watch this 3blue1brown video on how text transformers work. You’re right on that last part, but it’s a far cry from an intelligence; it’s just a very intelligent use of statistical methods. But it’s precisely for that reason that it can be “convinced”: parameters restraining its output have to be weighed into the model, so it’s just a statistic that can fail.

    I’m not intending to downplay the significance of GPTs, but we need to baseline the hype around them before we can discuss where AI goes next, and what it can mean for people. And that’s far before we use it for any secure services, because we’ve already seen what can happen.
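    That “parameters weighed into the model” point can be made concrete with a small sketch (all the numbers here are invented): one common guardrail style is just a bias added to token scores before the softmax, and a bias shifts probabilities without ever making the unwanted token impossible:

```python
import math

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented logits for three candidate tokens; token 0 is "unwanted".
logits = [5.0, 2.0, 1.0]
guardrail = [-2.0, 0.0, 0.0]   # bias pushing token 0 down
biased = [l + g for l, g in zip(logits, guardrail)]

probs = softmax(biased)
# Token 0 is now less likely, but its probability never reaches zero,
# so a prompt that shifts the logits far enough can still surface it.
```

    The restraint is just more arithmetic inside the same statistical machine, which is why “jailbreaks” keep working.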


  • The fallout of image generation will be even more incredible, imo. Even if models do become more capable, post-’21 training data will become increasingly polluted and difficult to distinguish as models improve their output, which inevitably leads to model collapse. At least until we have a standardized way of flagging generated images as opposed to real ones, but I don’t really like that future.
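    A minimal illustration of the collapse mechanism (token names and counts are invented): when each generation of training data is produced by sampling that truncates rare outputs, the tail of the distribution vanishes and never comes back:

```python
def truncated_retrain(counts, cutoff=0.05):
    """One generation: tokens below the probability cutoff never get
    generated, so the next model's training data loses them entirely."""
    total = sum(counts.values())
    return {tok: c for tok, c in counts.items() if c / total >= cutoff}

# Invented token frequencies in the original, human-written data.
human_data = {"common": 70, "less_common": 20, "rare": 7, "very_rare": 3}

generation_1 = truncated_retrain(human_data)
# "very_rare" (3%) fell below the cutoff and is gone for good;
# every later generation trains on a narrower distribution.
```

    Real collapse is messier than this toy version, but the one-way loss of the distribution’s tail is the core of it.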

    Just on a tangent: OpenAI claiming video models will help “AGI” understand the world around it is laughable to me. 3blue1brown released a very informative video on how text transformers work, and in principle all “AI” is at the moment is very clever statistics and lots of matrix multiplication. How our minds process and retain information is far more complicated; we don’t fully understand ourselves yet, and we are a grand leap away from ever emulating a true mind.
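    The “statistics plus matrix multiplication” claim is almost literal. A single attention step, the core trick that video covers, is just two matrix products and a softmax (pure-Python, toy-sized, with made-up vectors):

```python
import math

def matmul(A, B):
    """Plain (n x k) @ (k x m) matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(row):
    exps = [math.exp(x) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q @ K^T / sqrt(d)) @ V."""
    d = len(Q[0])
    scores = matmul(Q, [list(col) for col in zip(*K)])  # Q @ K^T
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, V)  # weighted averages of the value rows

# Two toy token vectors: the output is nothing more exotic than
# statistically weighted mixtures of the rows of V.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
```

    Stack enough of these layers with learned weights and you get GPT; nowhere in the pipeline does anything resembling a mind appear.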

    All that to say: I can’t wait for people to realize, hey, this is just Silicon Valley trying to replace talent in film production.