What linguistic constructions do you hate that no one else seems to mind?
  • You may be fewer irritated by this with age

    6
    What linguistic constructions do you hate that no one else seems to mind?
  • Misusing words like "setup" vs "set up", or "login" vs "log in". "Anytime" vs "any time" also steams my clams.

    10
    Fossil: A Git alternative with batteries included
  • I use Fossil for all of my personal projects. Having a wiki and bug tracker built-in is really nice, and I like the way repositories sync. It's perfect for small teams that want everything, but don't want to rely on a host like GitHub or set up complicated software themselves.

    12
    Android is removing COVID-19 exposure notification settings
  • I had this set up the day it was available in my area. Never got an alert. I find it difficult to believe I wasn't "exposed" during the pandemic, so I assume this didn't really provide much value.

    109
    kamila 🌸: Google Pixel 8 series cases (thread)
  • Google cases always seem hit-or-miss. I just buy the same Spigen case for every phone. I know I like it.

    10
    why do americans seem to always have thick necks in photographs?
  • Oh so now you're saying I have a big focal length?!?!?!? What is it with you people

    1
    Ever so photogenic Bleptember
  • Looks like he realized he left the oven on

    6
    did someone say its bleptember?
  • Juicy

    2
    Alternatives to PGP/GPG?
  • If you (or anyone) want to send a message to try it: briar://acqmqrd2pmpm5nqaugnbkaby2na72glt72rjx3xkf25qtl4ruf5ss

    It seems pretty neat.

    2
    Alternatives to PGP/GPG?
  • Got it. So more for data at rest rather than handling the sending too?

    SimpleX does file transfer pretty well, not sure about Briar now that I think about it.

    2
    Alternatives to PGP/GPG?
  • Briar and SimpleX seemed decent the last time I looked into this.

    I ended up using neither because I don't need privacy when talking to myself.

    5
    How do you keep your home servers online during powercuts?
  • I have all of my important electronics (computers, entertainment center, network equipment) on CP1500PFCLCD UPS units. They're scattered around the house, so there are several of them.

    ...then there's a 22 kW gas generator that handles everything once it switches on.

    1
    Isn't he incredibly handsome?
  • I'll uhhhhhh take a large uhhhhhhhhhh frappe with uhhhhhh extra creme

    16
    Is Brave Search trustworthy?
  • I can recommend Kagi. Happy customer for over a year.

    Some reading to get started:

    15
    What FOSS calculator app are you using?
  • Nice, I like this one. Disappointed the units converter doesn't have pounds and ounces as a target, though. It only does pounds or ounces, not both. I tend to see a lot of quantities specified as pounds and ounces at the same time in my day-to-day activities.

    1
    Does your major in college really matters?
  • Playing devil's advocate, I'd be worried you'd avoid doing work you don't want to do but that is core work that needs to be done. Not all employers want or are set up to employ wildcards. You may have to make your own path here, too.

    7
    *Permanently Deleted*
  • Kagi's verbatim search does this. You will actually get no results if nothing matches. It doesn't change your search and give you something you didn't ask for.

    Quoting in a normal "All results" search works, too.

    21
    What's with telling YEARLY salaries?
  • The exchange rate is pretty good right now.

    6
  • Poking around the network requests for ChatGPT, I've noticed the /backend-api/models response includes information for each model, including its maximum token count.

    For me:

    • GPT-3.5: 8191
    • GPT-4: 4095
    • GPT-4 with Code Interpreter: 8192
    • GPT-4 with Plugins: 8192

    It seems to be accurate. I've had content that is too long for GPT-4, but is accepted by GPT-4 with Code Interpreter. The quality feels about the same, too.

    Here's the response I get from /backend-api/models, as a Plus subscriber:

    json { "models": [ { "slug": "text-davinci-002-render-sha", "max_tokens": 8191, "title": "Default (GPT-3.5)", "description": "Our fastest model, great for most everyday tasks.", "tags": [ "gpt3.5" ], "capabilities": {} }, { "slug": "gpt-4", "max_tokens": 4095, "title": "GPT-4", "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.", "tags": [ "gpt4" ], "capabilities": {} }, { "slug": "gpt-4-code-interpreter", "max_tokens": 8192, "title": "Code Interpreter", "description": "An experimental model that can solve tasks by generating Python code and executing it in a Jupyter notebook.\nYou can upload any kind of file, and ask model to analyse it, or produce a new file which you can download.", "tags": [ "gpt4", "beta" ], "capabilities": {}, "enabled_tools": [ "tools2" ] }, { "slug": "gpt-4-plugins", "max_tokens": 8192, "title": "Plugins", "description": "An experimental model that knows when and how to use plugins", "tags": [ "gpt4", "beta" ], "capabilities": {}, "enabled_tools": [ "tools3" ] }, { "slug": "text-davinci-002-render-sha-mobile", "max_tokens": 8191, "title": "Default (GPT-3.5) (Mobile)", "description": "Our fastest model, great for most everyday tasks.", "tags": [ "mobile", "gpt3.5" ], "capabilities": {} }, { "slug": "gpt-4-mobile", "max_tokens": 4095, "title": "GPT-4 (Mobile, V2)", "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.", "tags": [ "gpt4", "mobile" ], "capabilities": {} } ], "categories": [ { "category": "gpt_3.5", "human_category_name": "GPT-3.5", "subscription_level": "free", "default_model": "text-davinci-002-render-sha", "code_interpreter_model": "text-davinci-002-render-sha-code-interpreter", "plugins_model": "text-davinci-002-render-sha-plugins" }, { "category": "gpt_4", "human_category_name": "GPT-4", "subscription_level": "plus", "default_model": "gpt-4", "code_interpreter_model": "gpt-4-code-interpreter", "plugins_model": "gpt-4-plugins" } ] }
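
    If anyone wants to compare, here's a rough Python sketch of how I'd pull the numbers out. The endpoint path is the one above, but the bearer-token header is just my assumption from what the browser sends, and Cloudflare may block plain scripts anyway; if it does, save the JSON from the network tab and load it from a file instead.

    import json
    import urllib.request

    # Assumption: the session access token copied out of an authenticated
    # chat.openai.com tab's request headers (placeholder, not a real value).
    ACCESS_TOKEN = "paste-your-session-token-here"

    req = urllib.request.Request(
        "https://chat.openai.com/backend-api/models",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    # Print each model's slug and advertised max_tokens.
    for model in data["models"]:
        print(f"{model['slug']}: {model['max_tokens']}")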

    Anyone seeing anything different? I haven't really seen this compared anywhere.

    1