The rules of the game are simple:
- You go to GeoGuessr
- You set it to world map
- You go through locations across multiple games, driving around each one
- Until you see a cow
- When you find a cow, you win
would love to but apparently it's paid now and has ranked competitive and coins and bullshit
Results! Turns out there's a free site called OpenGuessr that's just the classic game mode we all know and love, and you can play it for free like normal.
And I found some large friends on my very first leap!
Congrats! You win. And thanks for the info on the free alternative
"Original" Sin is what i've titled this piece. by me. sorry if you don't have "collapse long posts" enabled. I have many thoughts.
Hi. If your takeaway from this is that you agree with my thoughts on inspiration, but very much disagree with what I'm implying about AI image generation… then you are missing the point of this comic-essay, unfortunately.
Harry's AI image generation comment sparked this whole thing because it was the one thing he didn't bother to source, research, or explain correctly, which I thought was ironic in a video about plagiarism. The only relevant article he shows on screen to support the claim that OpenAI got sued for copyright violations (the user privacy violations were a separate incident) is here. It was authors suing OpenAI because ChatGPT summarized their books in detail, and therefore must have stolen them. That's it. That's why it's "COMPLICATED STEALING".
The point of this comic is for you to question the very concept of ownership over art, and to stop and consider the fearmongering and misinformation over AI "stealing" from artists. The point is for you to question copyright and what "stealing" even entails. The point is that nothing is original, whether made by a human or a human-made machine.
Post I made that goes more in depth about AI art and different models (including Creative Commons and public domain models), and shares my art industry mentor's thoughts on AI (hint: he does not think it is that deep). Has alt text for the visuals.
Post discussing the copyright lobbying around AI and how the Copyright Alliance was attacking both AI image gen and the Internet Archive: the Copyright Alliance does not care for the fanartist or small artist. This post was a huge eye-opener for me, and made me curious enough to figure out how Stable Diffusion really works. (And through the OP, I discovered that disabled artists use AI and are working towards creating ethical models. Really fucking cool!)
I also have a post from last year documenting my first steps into experimenting with AI and integrating it into a workflow. Though admittedly, I sound very reluctant in it. The laws on AI copyright weren't clear, and there were no Creative Commons 0/public domain trained models at the time.
And finally, this video essay about modern art by Jacob Geller. You may have seen it, but worth a rewatch—pay special attention to what the criticisms of modern/contemporary art are. And pay attention to what he says about effort.
That’s all. It’s okay to disagree with me, but those are my thoughts.
You do not need to extend this level of grace and thoughtfulness to "AI artists" because, as a collective, they do not give one single fuck about the ethical responsibilities of creating art. They will ask their AIs to exactly copy the styles of existing artists. They will build LoRAs, finetuned models trained on as few as three or four images, usually ones they don't own and never asked permission to use. They will take your drawings and run them through ControlNet/img2img to launder plagiarism. And if an image appears so many times in the dataset that the AI can reproduce it from memory, they do not consider that a pressing issue.
It is not inherently true that AI art tools simply kitbash together existing images. The assumption behind AI models was that they’re too small to memorize the dataset and therefore have to heavily “compress” it into general concepts and ideas. That assumption was historically true, but it’s being tested by larger models trained on smaller datasets (there are only so many “high quality”, high resolution images). And the AI community doesn’t care. In fact, the artists in particular see it as a goal. Unlike you, they want to make their art as un-transformative as possible.
There are "good ones". But they're not numerous or influential enough to steer the direction of AI art compared to the literal millions of prompt gurus posting 50 gaudy Beeple pastiches a day to DeviantArt, or the e/acc death cultists at OpenAI whose ethics are whatever the bare minimum is to avoid regulation, or the trads and fascists who are slowly turning AI art into a right-coded medium one retvrn post at a time (there is a large community on 4chan, in particular, who loathe artists as a category and see AI art as a weapon to destroy their careers). The fact that you're considering these questions is by itself proof that your art is meaningfully separate from theirs. You are actually trying.
https://twitter.com/kingdomakrillic
Pillowfort doesn’t seem to have caught on (plus it’s still in beta) and I need to feed my social-media addiction somehow.
Anonymous asked:
I'm not sure exactly, but maybe typing "pip" instead of "pip3" would work.
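If you want to check where each command actually points, both take a version flag:

pip --version
pip3 --version

Whichever one reports the Python install you set ESRGAN up with is the one to use.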
(Also, this blog is inactive. Try asking /r/gameupscale)
This guide is outdated. Please check https://www.reddit.com/r/GameUpscale/ for more up-to-date info.
Note: ESRGAN training appears to be slower on Windows than Linux by around 5x, at least on my machine. I don’t know the cause of this.
If you haven't gotten ESRGAN set up for testing, please read this first: https://kingdomakrillic.tumblr.com/post/178254875891/i-figured-out-how-to-get-esrgan-and-sftgan. Everything needed to test ESRGAN is also needed to train it: Python, CUDA, and so on.
If you’ve already done all that, go to Step 1.
1. Download and install Microsoft Build Tools 2015. It's needed for one of BasicSR's dependencies.
Then go to the command line and paste in this: pip install numpy opencv-python lmdb
2. Download BasicSR and the ESRGAN pretrained models.
https://github.com/xinntao/BasicSR
https://github.com/xinntao/BasicSR#pretrained-models
Place the models in (BasicSR directory)/experiments/pretrained_models
3. Download a dataset. The BasicSR creator uploaded several datasets to use here, but there’s plenty of other datasets you can use online. 1000 tiles (see step 5) are the absolute minimum for getting good results, but the more, the better.
https://github.com/xinntao/BasicSR#datasets
Make absolutely sure that none of the images are greyscale or indexed, or have alpha channels. RGB only, or else you will get a "Sizes of tensors must match" error. You can use IrfanView or BIMP (see below) to convert the images to RGB.
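If you'd rather script this than click through a GUI, here's a rough Python sketch using Pillow (install it with pip install Pillow; the folder path is a placeholder and the folder is assumed to contain only image files):

import os
from PIL import Image

folder = 'C:\\Users\\Username\\BasicSR-master\\General100'
for name in os.listdir(folder):
    path = os.path.join(folder, name)
    img = Image.open(path)
    if img.mode != 'RGB':  # catches greyscale (L), indexed (P), and alpha (RGBA/LA)
        img.convert('RGB').save(path)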
4. You will need to split your "training" and "validation" images. Take about 5-10% of your images and put them in a separate folder; these will be your validation images.
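If you'd rather not move files by hand, a similarly rough Python sketch can move a random ~10% into a validation folder (paths are placeholders; adjust the ratio to taste):

import os
import random
import shutil

src = 'C:\\Users\\Username\\BasicSR-master\\General100'
dst = 'C:\\Users\\Username\\BasicSR-master\\General100_val'
os.makedirs(dst, exist_ok=True)

files = os.listdir(src)
random.shuffle(files)
for name in files[:len(files) // 10]:  # ~10% of the images
    shutil.move(os.path.join(src, name), os.path.join(dst, name))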
5. You will also need to convert your dataset into fixed tiles. Open up codes/scripts/extract_subimgs_single.py.
Change crop_sz to 192 or 128 (I'd stick to the latter unless you have a beefy graphics card), input_folder to the full path of your image folder, and save_folder to where you want to save the tiles. If you're using Windows, replace all the backslashes ("\") with double backslashes ("\\"), as "\" is an escape character in Python strings.
Example:
input_folder = 'C:\\Users\\Username\\BasicSR-master\\General100'
save_folder = 'C:\\Users\\Username\\BasicSR-master\\General100_tiles'
Double-click the script to run it. Repeat this process for the validation images.
6. You will need to batch-convert these HR tiles into 4x-downscaled versions. Download and open IrfanView (https://www.irfanview.com/), press B to open the batch-convert dialog, check "Use advanced options", and click the "Advanced" button to access the resize settings. If you're specifically training for low-quality images, you may also want to check "Change Color Depth" or add some JPG compression, noise, or dithering; that also helps make the results smoother overall. Make sure that the LR and HR images have the same format and filenames.
If you have GIMP installed, you can also download a batch-manipulation plugin called BIMP and process the images that way.
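And if you'd rather script this step too, a Pillow sketch like this produces the 4x-downscaled LR copies with matching filenames (paths are placeholders; I'm assuming bicubic resampling is close enough to what the pretrained models expect):

import os
from PIL import Image

hr_folder = 'C:\\Users\\Username\\BasicSR-master\\General100_tiles'
lr_folder = 'C:\\Users\\Username\\BasicSR-master\\General100_tiles_LR'
os.makedirs(lr_folder, exist_ok=True)

for name in os.listdir(hr_folder):
    img = Image.open(os.path.join(hr_folder, name))
    lr = img.resize((img.width // 4, img.height // 4), Image.BICUBIC)
    lr.save(os.path.join(lr_folder, name))  # same filename as the HR tile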
7. Go to codes/options/train/train_ESRGAN.json and make the following changes (there's a filled-in example after this list):
name: change to whatever you want, removing the “debug” from the name
scale: Default is 4, meaning that the HR images are 4x larger than the LR images. I’d recommend leaving it as it is, but if you do set the value to something else, you will need to alter path: {pretrain_model_G (see below)
train : { dataroot_HR: location of the training HR images
train : { dataroot_LR: location of the training LR images
val : { dataroot_HR: location of the HR validation images
val : { dataroot_LR: location of the LR validation images
path : { root: the location of the BasicSR directory
train : { HR_size: the size of the HR tiles. Leave at 128 if you’re getting “out of memory” errors.
train : { batch_size: You could lower this number if you’re getting “out of memory” errors, but that produces other errors on my Windows installation. “n_workers” may be an alternative.
train : { val_freq: How often the model will be validated. Defaults to 5,000 iterations (5e3), so feel free to lower it.
path: {pretrain_model_G: The model ESRGAN will initialize from. If using a scale other than 4, set to “null” without quotes
logger: { print_freq: How often the program will update you on how many iterations have passed. Set this to as low as 1 if you’d like
logger: { save_checkpoint_freq: How often a new model will be saved. Set it to the same number as val_freq.
Again, make sure to use double backslashes.
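Put together, the fields from this step end up looking roughly like this (an abridged sketch, not a drop-in file: the real train_ESRGAN.json has more keys, which you can leave at their defaults; the name and paths are placeholders):

{
  "name": "my_model",
  "scale": 4,
  "datasets": {
    "train": {
      "dataroot_HR": "C:\\Users\\Username\\BasicSR-master\\General100_tiles",
      "dataroot_LR": "C:\\Users\\Username\\BasicSR-master\\General100_tiles_LR",
      "HR_size": 128,
      "batch_size": 16
    },
    "val": {
      "dataroot_HR": "C:\\Users\\Username\\BasicSR-master\\General100_val_tiles",
      "dataroot_LR": "C:\\Users\\Username\\BasicSR-master\\General100_val_tiles_LR"
    }
  },
  "path": {
    "root": "C:\\Users\\Username\\BasicSR-master",
    "pretrain_model_G": "../experiments/pretrained_models/RRDB_PSNR_x4.pth"
  },
  "train": { "val_freq": 1000 },
  "logger": { "print_freq": 100, "save_checkpoint_freq": 1000 }
}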
8. Use the command line to navigate to the codes folder and run this command: python train.py -opt options/train/train_ESRGAN.json
You could also create a .bat file so you can just double click, though that does make it harder to find errors (as an error will close the command prompt instantly).
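If you do go the .bat route, adding a pause line at the end keeps the window open long enough to read any error (the path is a placeholder):

cd /d C:\Users\Username\BasicSR-master\codes
python train.py -opt options/train/train_ESRGAN.json
pause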
9. You can check on the model's progress by going into the "experiments" folder. Your older sessions will have "archived" in their names, while the latest session will not. Inside each folder is the "models" folder, where new models are saved, and the upscaled validation images will appear in "val_images". Once you're satisfied, hit Ctrl-C in the terminal to quit training and copy one of the "G.pth" files to ESRGAN's models folder.
The “training_state” folder contains .state files that let you resume progress after you’ve stopped. Just add -resume_state next time you run train.py, plus the path to the file.
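So a resumed run looks something like this (the session name and .state filename are placeholders; point it at whichever checkpoint you want from your training_state folder):

python train.py -opt options/train/train_ESRGAN.json -resume_state ../experiments/my_model/training_state/5000.state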
If you’re feeling brave, you can mess with the GAN weight, feature weight and pixel weight in train_ESRGAN.json or initialize from a different model instead of RRDB_PSNR_x4.pth