You could try out Linux Mint¹; it's Ubuntu-based and disables Snap by default².
Giliam de Carpentier wrote some software for fun to generate various optimized walking mechanisms. Combined with electronics and woodworking skills, he was able to turn one of these mechanisms into a wireless walking wooden coffee table: the Carpentopod.
https://www.decarpentier.nl/carpentopod
It seems we're focusing our interest on two different parts of the problem.
Finding the optimal way to classify which images compress best in bulk is an interesting problem in itself. In this particular case, the person asking had already picked out similar images by hand, and they can be identified by their timestamps, which simplifies any similarity comparison. What I wanted to find out was how well the similar images can be compressed with various methods and codecs with minimal loss of quality. My goal was not to use it as a method to classify the images; it was simply to examine how well the compression stage would work with various methods.
It's a pillar of democracy to protect the autonomy of the people.
Wait.. this is exactly the problem a video codec solves. Scoot and give me some sample data!
I was not talking about classification. What I was talking about was a simple probe of how the compressed size of a collage of similar images compares to that of the images compressed individually. The hypothesis is that a compression codec would compress images with a similar color distribution better in a spritesheet than if it encoded each image individually. I don't know; the savings might be negligible, but I'd assume there is something to gain, at least for some compression codecs. I doubt deduplication after compression has much to gain.
I think you're overthinking the classification task. These images are very similar and I think comparing the color distribution would be adequate. It would of course be interesting to compare the different methods :)
The first thing I would do when writing such a paper would be to test current compression algorithms by creating a collage of the similar images and seeing how its size compares to that of the individual images.
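A quick way to probe that hypothesis before touching real codecs is to compare compressed sizes directly. This is only a sketch of the principle, using zlib and synthetic near-duplicate byte buffers standing in for the actual image files (a real image codec like AVIF or WebP would behave differently, and a proper test should use the actual images):

```python
import random
import zlib

# Synthetic stand-ins for "similar images": one random base buffer plus
# seven near-duplicates with a handful of byte-level edits each. Real
# image codecs exploit 2-D structure, so this only demonstrates the
# general principle that shared content compresses better in one stream.
random.seed(0)
base = bytes(random.randrange(256) for _ in range(4096))

images = [base]
for _ in range(7):
    variant = bytearray(base)
    for _ in range(50):  # a few localized "pixel" edits
        variant[random.randrange(len(variant))] = random.randrange(256)
    images.append(bytes(variant))

# Compress each image on its own versus all of them as one "collage" blob.
individual_total = sum(len(zlib.compress(img, 9)) for img in images)
combined = len(zlib.compress(b"".join(images), 9))

print(f"individual: {individual_total} B, combined: {combined} B")
```

Whether the gain survives with a real image codec depends on how far apart the shared content ends up in the collage; deflate, for instance, only finds matches within its 32 KB window, and block-based image codecs only exploit redundancy within their prediction reach.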
I'm sitting safely and calmly on the platform with a cup of coffee, watching the fools run around shouting AI without knowing what it is or what they need it for. I have peace of mind, apart from starting to feel the coffee a bit. I do what I've always done: I use the best tools to solve a task, and that isn't measured by how you pronounce the tool's name. Maybe it uses a neural network, maybe it doesn't. That's fairly unimportant to solving the task. It's software. It's a tool. The parameters are correctness and efficiency.
If the digitalization lemmings will just cover their ears for a moment, I have something to say... hello! Can you hear me? No? Good.
Sometimes a digital tool is not the best solution at all.
There, I've said it. I have to run now, because I need to get an experienced colleague's opinion on something before my break is over. I simply can't be bothered to ask a machine that can put words together in a convincing order. If my colleague doesn't know, I'll see if I can find a book or some articles by other experienced people who have studied the subject. See you.
Sorry, I don't understand 'Jeg sidder trygt og roligt på.......'
dom.push.enabled = false
What are your expectations for the software? I assume it's not enough to use a group chat and tell people where you are, but from the description you've given that would be my suggestion.
I think that B is a problem for everyone's eyes :)
I have this in code I'm writing right now...
#ifdef DEBUG
#define DEBUG_PRINT(...) printf(__VA_ARGS__)
#else
#define DEBUG_PRINT(...)
#endif
It is the most straightforward way to get at the state of things while hammering on the keyboard, trying to mash up something that looks like a program.
That's a lot of trouble; you can just ask it if it's telling the truth.
I take it you haven't heard about Free Beer.
Desktop Applications
One of the most controversial changes of Chrome’s MV3 approach is the removal of blocking WebRequest, which provides a level of power and flexibility that is critical to enabling advanced privacy and content blocking features. Unfortunately, that power has also been used to harm users in a variety of ways. Chrome’s solution in MV3 was to define a more narrowly scoped API (declarativeNetRequest) as a replacement. However, this will limit the capabilities of certain types of privacy extensions without adequate replacement.
Mozilla will maintain support for blocking WebRequest in MV3. To maximize compatibility with other browsers, we will also ship support for declarativeNetRequest. We will continue to work with content blockers and other key consumers of this API to identify current and future alternatives where appropriate. Content blocking is one of the most important use cases for extensions, and we are committed to ensuring that Firefox users have access to the best privacy tools available.
https://blog.mozilla.org/addons/2022/05/18/manifest-v3-in-firefox-recap-next-steps/
LibreWolf explicitly rejects donations.
The quote is a derivative of something Bjarne Stroustrup said himself¹.
C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off.
Ah okay, so you're not attempting to deprecate ALSA. That's a relief! :)
Is it all 4 or all 8? ;)