  • Not what it implicitly advertises, unfortunately. It lists all files (ls) recursively in all subdirectories (-R), one per line with details (-l), sorted by time, newest first (-t). Only the first 10 lines of output are shown (| head).
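
    For reference, the advertised one-liner, reconstructed from the flags named above, is presumably:

    ls -Rlt | head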

    The problem is that the files are sorted by time per directory only, and ls then recursively descends into the subdirectories in that order: a depth-first traversal with a per-directory sort, never a global one, if you’re so inclined. Effectively, this shows the newest 10 files/dirs of the current directory before diving down at all, and if you have fewer files/dirs than that in your search base directory, you probably don’t need this hack to begin with.

    In good tradition, here’s something that actually works as likely intended: find recursively lists only regular files (-type f), starting in the current directory (.), and runs ls (-exec) to show details (-l) of each batch of files passed as arguments ({} +), using a specific, lexicographically sortable time format (--time-style). The resulting comprehensive list of all files is then sorted in reverse (-r) order, using the sixth whitespace-separated column of each line as the key (-k6), which just so happens to be that sortable timestamp. Lastly, only the 10 most recent files are shown (| head), as before:

    find . -type f -exec ls -l --time-style=+"%Y-%m-%dT%T" {} + | sort -r -k6 | head

    Running this is a great way to start your day! It’ll give you ample time to brew some coffee or tea, slip into your most comfortable programmer socks, and finish lunch by the time it has scanned your 18.3 TB of furry smut to show you what you “touched” last.

    It’ll likely be irrelevant cache files, though, if you run it from your $HOME. Excluding directories is left as an exercise for the reader.
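
    One possible answer to that exercise, as a sketch: prune the offending directories before the -type f test even sees them. Assuming the main culprit is ~/.cache and you run this from your $HOME:

    find . -path ./.cache -prune -o -type f -exec ls -l --time-style=+"%Y-%m-%dT%T" {} + | sort -r -k6 | head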


  • Gyroplast@pawb.social to me_irl@lemmy.world · 9 days ago

    Oof. This short thread made me feel really old. Allow me, I’ve been waiting for this for decades:

    When I was young, we only had HTML4 with a ton of nested tables and animated GIFs, if you were lucky! None of those fancy <div>s or CSS of yours! If you wanted “dynamic content”, you used SSI or questionable executables in your cgi-bin! Centering content was a proper, manly challenge! And don’t forget the “Optimized for IE 3.0!” 88x31 badge below the huge, blinking, centered(!) UNDER CONSTRUCTION image!

    THAT’s peak web design for you, everything else is just rainbow sprinkles!!1!

  • TL;DR: Don’t think of the AUR as a package source, but as an only mildly moderated, yet ultimately free and open, sharing platform for PKGBUILDs, primarily useful for (self-)packagers, not necessarily for non-technical end users.

    Before the AUR, you had people individually hosting their PKGBUILDs anywhere, sometimes on GitHub or the BBS (yeah, it’s been a while), sometimes along with a repository URL you could add to your pacman.conf to install pre-built packages right away, and it was glorious. I didn’t have to write a working PKGBUILD myself from scratch, and I could decide whether I trusted that particular packager not to screw me sideways with a pre-built package. An official “Trusted User” (TU) role emerged from this idea, recently renamed to Package Maintainer (PM). This is fundamentally still how the AUR works; it has just become much bigger and easier to search for particular software. Packagers gift you their idea of how software should be packaged, for you to expand upon, take inspiration from, learn from, or use as-is if you determine it to be good for your purpose.
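
    For illustration, such a third-party repository was (and still is) wired up with a short block in /etc/pacman.conf; the repository name and URL here are made up:

    [gyros-packages]
    SigLevel = Optional TrustedOnly
    Server = https://example.com/archlinux/$arch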

    The AUR is ultimately a great resource for packagers, and still useful for users, but “true end users” get the official extra repository (which, by now, has absorbed the old community repo) before that, and should try to avoid the AUR if they can, or at least be prepared to put in effort to establish trust, or to get help.

    A handful of Package Maintainers manually adopt, and subsequently vet, sufficiently popular packages to move them from the AUR into the official extra repository, which is deemed safe to use as-is, on a best-effort basis. Obviously, this is a bottleneck, as it is not feasible for the few volunteering PMs to adopt and maintain 10k+ AUR packages and be held to any quality standard. That’s why “you are on your own” with the AUR.

    On the positive side, there’s a voting system to gauge package popularity, AUR packagers have a public list of maintained packages, and every package comes with a comprehensive git commit history. Establishing trust is still crucial, but I feel hard-pressed to name a reasonably popular/useful package that isn’t either already in extra or long-maintained in the AUR.

    The biggest risk for malware slipping into a package, IMHO, is a popular package being orphaned and then adopted by a malevolent user. This is something I personally look out for: if the maintainer changed, I make sure to check the commit history to see what they did. Most of the time it’s genuine fixes, but if anything is changed without a damn good and obvious reason, hit up the AUR mods and ask for help. That is how malware gets spotted. Also, a typical update only bumps the version (and checksums) in a PKGBUILD, which is a change I feel safe waving through, too. If the download URI changes, or patches are added, I do look at them to determine the reason, and if that isn’t explained well enough to understand, it’s a red flag. Better to ask someone before running it.
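
    To sketch that check (package name hypothetical; every AUR package lives in a plain git repository under aur.archlinux.org):

    # fetch the packaging repo and inspect its history
    git clone https://aur.archlinux.org/some-package.git
    cd some-package
    git log --stat        # who changed what, and when
    git log -p PKGBUILD   # full diffs of the build script itself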

    source: personal involvement in Arch since 2002