
Thread: City Map by MidJourney

  1. #1
    Guild Journeyer schattentanz
    Join Date
    Oct 2013
    Location
    Ladenburg, Germany
    Posts
    127

    Discuss City Map by MidJourney

    Hi Guildies

    I just tried my hand at MidJourney and wanted to see what would happen if I let an artificial intelligence draw a city map.
    The result is impressive, actually:

    schattentanz_city_map_medival_cathedral_castle_market_place_8k__6e83300b-b797-4419-91b9-c31694bb.png

    All that's lacking are some labels, but technically this is far better than anything I could come up with ... maybe I need to experiment some more with keywords to get different results ...

    What do you say?
    Do you like that map?
    What would you change / add?



    Kind regards,
    Kai
    CatZeyeS Headquarters - home of free miniature skirmish game rules and supplements

  2. #2
    Community Leader Jaxilon
    Join Date
    Nov 2009
    Location
    A beach in Ecuador
    Posts
    5,548


    I was wondering how MidJourney would do with something like this. It is interesting for sure, and I figured that if I had the time and money to spare I'd like to play with it myself. Thanks for sharing the result.

    I think I could do better by hand but the amount of time saved makes me think I'd use something like this to create a starting point and then overlay my own work.

    I wonder what would happen if you entered "style of Mike Schley" for example.

    This piece is kind of a high-level overview. Would the AI actually create more detailed buildings, à la a Google Earth photo?

    Anyway, cool stuff.
    “When it’s over and you look in the mirror, did you do the best that you were capable of? If so, the score does not matter. But if you find that you did your best you were capable of, you will find it to your liking.” -John Wooden

    * Rivengard * My Finished Maps * My Challenge Maps * My deviantArt

  3. #3
    Administrator Redrobes
    Join Date
    Dec 2007
    Location
    England
    Posts
    7,269
    Blog Entries
    8


    I was going to have a pop at that kind of thing when I could get my hands on something to try. There have been some posts about Stable Diffusion, or something like that, where you can get the code and run it yourself, but my graphics card would not handle it.
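
    For anyone who does want a go at running it locally: here's a minimal sketch using Hugging Face's diffusers library (the model name and prompt are my assumptions, and it assumes a CUDA card with a few GB of VRAM):

    Code:
    import torch
    from diffusers import StableDiffusionPipeline

    # Load the weights once; half precision roughly halves the VRAM needed.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # One text prompt in, one 512x512 image out.
    image = pipe("medieval city map, cathedral, castle, market place").images[0]
    image.save("city_map.png")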

    Everyone is sort of mulling over whether these bits of art are original, who owns the copyright, whether a machine can hold a patent, and all sorts of things like that. I think that in time it will be seen as a tool to create art, only a tool that does an awful lot in one "stroke". I have seen some great videos on YouTube where the user paints over the sections they don't like, the computer has another go at filling in the missing bit, and when it's good you keep it and keep working the image in sections. Then it really feels like a tool rather than something creating the whole image. But it's a smooth grey scale from one opinion to the other, and it's hard to pin down an absolute idea of what it is. Certainly it's very interesting, though.
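
    That paint-over-and-fill-in workflow goes by the name inpainting in these tools. A rough sketch of the idea, again with diffusers (the model name, prompt, and file names are assumptions, not a recommendation):

    Code:
    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("city_map.png").convert("RGB")
    # White pixels in the mask get regenerated; black pixels are kept.
    mask = Image.open("mask.png").convert("RGB")

    fixed = pipe(prompt="dense medieval rooftops",
                 image=init, mask_image=mask).images[0]
    fixed.save("city_map_fixed.png")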

    In terms of your map, well, I think it's more a piece of art than a true map. It has lines and shapes, but a map is more a representation of spatial data, done in a way that highlights certain features the cartographer chose. Usually colour and arbitrary shape take a back seat to the forms having some real-world or fantasy-world meaning in a "real" space.

    I would imagine that if you had an engine trained on loads of images of just maps, instead of the world's historical museum art pieces, then the output might look more typical. But ultimately, unless the engine is somehow mapping colour and shape to something that represents things in a world, it's unlikely to look like a map.

    But total kudos for having a go. I can certainly see a lot more of this type of stuff being used to make maps. This whole area is so new that I have not really formalized how I feel about it, beyond the technical fascination of it working in any form whatsoever. I really think that computers, if not humanity, crossed a sort of Rubicon with DALL-E, the chess engines, and some of the recent text-based AI that is reaching parity on Turing-test-style challenges. It feels like a time when people went out exploring the world with no idea what they were about to come across. The next few years of computing are going to be weird.
    Last edited by Redrobes; 09-26-2022 at 01:38 PM.

  4. #4
    Administrator waldronate
    Join Date
    Mar 2007
    Location
    The High Desert
    Posts
    3,611


    How well these things work is pretty much a matter of the quality of the input data set combined with the amount of computing power expended in training. The models broadly reproduce associated parts of that input (for example, if there are no items in the corpus labeled "Mike Schley", then nothing will be generated in anything like that style). The cost of training the models is astronomical (I saw numbers on the order of $5M for training GPT-3 and $600K for Stable Diffusion), but generating outputs from a trained model takes far less computing power and is well within the reach of common home systems. It really comes down to how long you're willing to wait, because more compute gets results faster.
    Because the input corpus for most of the current models (DALL-E, Stable Diffusion, MidJourney) wasn't particularly well vetted as far as terms of use go, the outputs are equally suspect as far as reuse goes. I saw an article saying that Getty Images won't accept images generated by these systems because of this issue. For example, if someone goes out, finds everything with "Mike Schley" as an associated term, and trains a model from that input, then the outputs from that model are totally unfit for any purpose except sale by Mike Schley.
    There was a nice article the other day (I don't recall the source now because I'm getting old and feeble-minded) about someone who used Stable Diffusion outputs for generating palettes from a prompt. Basically, it generates an image, quantizes it down to something like 16 colors, and uses the palette from that. It's useful for getting colors for a text-described mood, but it will ultimately just be a reflection of the training data. Stable Diffusion, for example, was trained on the LAION-Aesthetics V2 dataset, which has a lot of brightly-colored artwork, so you'll probably get very nicely colored palettes.
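    The core of that palette trick is only a few lines. A sketch with Pillow, assuming a generated image already on disk (the file name is made up):

    Code:
    from PIL import Image

    img = Image.open("generated.png").convert("RGB")

    # Median-cut quantization down to 16 colors, then read the palette back.
    quantized = img.quantize(colors=16)
    flat = quantized.getpalette()[:16 * 3]  # flat [r, g, b, r, g, b, ...] list
    palette = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    print(palette)
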
    Style transfer has been a fun toy in the last few years and it's very broadly in the same vein as image synthesis. Being able to push an artist's style onto a photograph is quite fun!
    Quite aside from image generators, the code models are very scary. GitHub Copilot, for example, will generate large bodies of code based on a large number (perhaps most) of the public repositories on GitHub. In theory, it was only supposed to use public and permissively-licensed repositories, but I've seen Twitter posts from folks whose opinions I respect showing that they could get it to reproduce significant bodies of non-public code, including the non-permissive license statements. There is a checkbox that is supposed to exclude that, but those outputs were interesting as well. It certainly generated a large amount of code very quickly, including some results that were flat-out wrong. I suspect that the future may hold writing tests for machine-generated code for far too many new entrants to the field.
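    To make that last point concrete, here's a hypothetical example of the kind of test a human still has to write to vet a machine-generated helper (both the function and the test are made up):

    Code:
    # Pretend this body came back from a code assistant.
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())

    # The human contribution: pin down what "correct" actually means.
    def test_slugify_lowercases_and_hyphenates():
        assert slugify("City Map by MidJourney") == "city-map-by-midjourney"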

  5. #5
    Guild Journeyer schattentanz
    Join Date
    Oct 2013
    Location
    Ladenburg, Germany
    Posts
    127


    Right, first things first:
    I'm a MidJourney noob, and I cannot make it create the things I want. (What I want: a cowboy dueling a sci-fi trooper in front of a fairy castle.)
    I just learned a couple of days ago that MidJourney currently is not able to picture multiple objects properly, and you'll get better results if you ask for one object only.
    Also, the results you get for the keywords entered are pretty random: I've seen someone enter "red bear" and get pretty nice results, whereas my query for "gunslinger" just sucked.

    This whole AI-generated stuff is pretty new. In German we'd say it is still "in its children's shoes", i.e. in its infancy (just btw ..).
    So there is still so much for all those machines to learn.
    We also still have to learn how to use the machines: to be honest, I don't have a clue how to feed MidJourney the proper terms to get a believable picture of a (demi-)human.
    So I tried my luck at a city map.
    The result is pretty impressive for what it is - at least in my eyes - but I agree with "it looks more like a piece of art":
    A proper city map would need to have "highlights". OK, this one does have some (the red areas), maybe because some of my keywords were "cathedral, marketplace, castle".
    Maybe I would need to outline the streets and adjust their colour or something.
    Also, some labels to "highlight" the highlights even more, a "what can be found where" box - that kind of stuff.
    But as a basis to start working from, I'd say it's not that bad ..



    Kind regards,
    Kai
    CatZeyeS Headquarters - home of free miniature skirmish game rules and supplements

  6. #6
    Professional Artist Naima
    Join Date
    Mar 2010
    Location
    Italy
    Posts
    1,586


    I already tried some tests with MJ in the past, but the results were not that great; the AI lacks any kind of "imaginative" speculation, and the results are mostly random ... though when it copies, it makes some decent stuff ...

    AI MAp.png

  7. #7
    Guild Artisan fol2dol
    Join Date
    Apr 2016
    Location
    Montreuil, FRANCE
    Posts
    542


    Hi,
    @schattentanz
    Did you try multi-prompts?
    I found a tutorial that helped me with this kind of dilemma.

    Multiprompts. ( :: ) Try this:
    two subjects :: Teddy Roosevelt fighting bigfoot :: action pose full body shot, in style of EC Comics, Al Feldstein ::2 realistic paper, four color printing, vintage comic, vibrant color, retro style, high contrast, inkwork, fine detail, max rendering
    One of the easiest ways to use multiprompts is as parallel, non-contradictory descriptions of the same scene. So 'two subjects' is an easy description for it to figure out, and the more complex description of the same scene, 'Teddy Roosevelt fighting bigfoot', hangs on that. Then you can add further sections. My first test with this lost some of the artistic style I was after, so I separated out 'action pose full body shot, in style of EC Comics, Al Feldstein' and gave that section of the prompt some weight (::2), telling it to focus more on that. The less important words like 'realistic paper' are all combined in a single, relatively weak part of the prompt, so now it knows that if it has to choose between 'two subjects' and 'realistic paper', it'll choose 'two subjects'.
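
    If you build prompts like this often, the bookkeeping is easy to script. A small sketch (the helper function is made up; only the ::weight syntax is MidJourney's):

    Code:
    def multiprompt(parts):
        """Join (text, weight) pairs with MidJourney's :: separator.

        A weight written right after a :: applies to the part before it,
        so ("foo", 2) becomes "foo ::2".
        """
        chunks = [f"{text} ::{w}" if w != 1 else f"{text} ::"
                  for text, w in parts]
        # A trailing unweighted separator is optional, so drop it (Python 3.9+).
        return " ".join(chunks).removesuffix(" ::")

    print(multiprompt([
        ("two subjects", 1),
        ("Teddy Roosevelt fighting bigfoot", 1),
        ("action pose full body shot, in style of EC Comics, Al Feldstein", 2),
        ("realistic paper, four color printing, vintage comic", 1),
    ]))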
