DALL-E-2 In and Out Painting

See AI Fixing for a detailed description of how this was done






Style Transfer

Nightcafe applying Self-Portrait (Vincent van Gogh) to a photo of me as a Stormtrooper. It did not work out as expected, but the pattern was applied, so a learning experience at least.
Originally an oil painting by me; using the original as a guide, I tried different AI styles and tools. The ImageFX variant was generated from a description of the artwork rather than the artwork itself because, at the time of trying, Whisk (the Google Labs tool that can use existing images) was not available in the UK.



Originally a photo of a de Havilland Heron aircraft I took on the ground at the de Havilland Museum. I then worked with Photoshop to make the aircraft fly, using a sky photo I also took; a lot of work (hours) in those days, whereas the AI variant took only seconds.

Original artwork by me, inspired by a school visit to the National Gallery. These paintings seemed to be a big thing at some time in the past, and it seemed perfectly reasonable to me to also do some. Interestingly, with all the controls on AI, it is much harder in the 2020s to replicate what I made in the 1970s/1980s.






An original chalk pastel sketch I made, used as input for various AI generators/editors







An original gouache painting I made, used as input for various AI generators/editors


Guided Generation

The application GauGAN2 assists you in creating AI-generated landscapes: you draw a sketch and the AI enhances it





The application Artbreeder has a Collage option which can assist you in generating images. Here I provided a background image and the prompt "deer in a forest", then rendered an image, which can be re-rendered with different random seeds
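The re-rendering behaviour above comes down to one idea shared by most of these engines: the random seed fully determines the output, so the same seed reproduces the same image and a new seed gives a new variation. A minimal sketch of that idea (the `render` function is a stand-in, not Artbreeder's actual API; the "image" is just a small array of pixel values):

```python
import numpy as np

def render(seed: int, shape=(4, 4)):
    # Stand-in for an image generator: the seed fully determines the output
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=shape, dtype=np.uint8)

same_a = render(42)
same_b = render(42)   # same seed -> pixel-identical "image"
variation = render(99)  # different seed -> a different variation

print(np.array_equal(same_a, same_b))  # True
```

This is why a result you like can be recreated exactly later, provided you note the seed along with the prompt and other settings.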

Stable Diffusion has a number of powerful features, including in-painting: you provide a base image and, optionally, a mask (the bits of the image you keep), and the rest is modified by the prompt depending on the other values (strength, guidance, seed). Here I provided a background image and the prompt "prince and princess posing for photo in sumptious room"
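The role of the mask can be sketched without any model at all: pixels where the mask is set are kept from the base image, and the rest are replaced by whatever the model generates from the prompt. A minimal sketch with made-up 4x4 data (the flat `generated` array stands in for the model output, which a real pipeline derives from the prompt, strength, guidance and seed):

```python
import numpy as np

# Base image: a 4x4 grayscale ramp standing in for the photo
base = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16

# Mask: 255 = keep this pixel from the base, 0 = let the prompt repaint it
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :] = 255  # keep the top half (e.g. a face)

# Stand-in for the model's generated content
generated = np.full((4, 4), 200, dtype=np.uint8)

# Compose the final image: masked pixels survive, the rest are repainted
result = np.where(mask == 255, base, generated)

print(result[0, 0])   # 0   (kept from the base image)
print(result[3, 3])   # 200 (repainted by the "model")
```

A real in-painting pipeline blends the two halves inside the diffusion process rather than as a final paste, but the keep/repaint split the mask defines is the same.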

Easy Diffusion with the Star Trek uniforms LoRA model applied to a photo of my wife Angela, with a mask applied to keep her face in the final work


Various engines allow a seed image as a starting point, which can then be modified. Below is an original artwork by me (in acrylic) which was then processed by various engines with the guide prompt "hammerhead shark floating above a coral reef viewed from below"
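In these image-to-image modes, the strength setting typically decides how far the seed image is pushed into noise before denoising begins: near 0 it returns the original almost untouched, near 1 it all but ignores it. A rough sketch of that arithmetic (the function name and exact mapping are illustrative assumptions; engines differ in the details):

```python
def steps_applied(num_inference_steps: int, strength: float) -> int:
    """How many denoising steps actually modify the seed image.

    strength 0.0 leaves the seed image untouched (0 steps);
    strength 1.0 effectively generates from pure noise (all steps).
    """
    strength = min(max(strength, 0.0), 1.0)
    return int(num_inference_steps * strength)

print(steps_applied(50, 0.6))  # 30
print(steps_applied(50, 0.0))  # 0
print(steps_applied(50, 1.0))  # 50
```

This is why a low strength keeps the composition of the original acrylic while a high strength lets the prompt take over.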



Model Variations

There are various models you can use beyond Stable Diffusion, each with its own style of rendering. Here we seed with an original photograph I took with my trusty Minolta 7D and compare variations of this photo made by different AI models



An original chalk sketch I made, enhanced by Easy Diffusion

An original watercolour painting I made, enhanced by Easy Diffusion


An original chalk pastel sketch I made, enhanced by Easy Diffusion using the dreamlike-photoreal-2.0 model and a specific LoRA model for each famous actress shown








An original photograph I took, used as an initial image by Easy Diffusion (and others) applying various models and variations in the prompt









An original photograph I took while on holiday in Santorini, used as an initial image by Easy Diffusion applying various models and variations in the prompt





