So, what are we going to call all this semi-automated, generative, and AI-assisted graphics production? My vote would be for Assisted Graphics Intelligence, or AGI for short. To be pronounced Ag-eee, as when the name ‘Agnes’ is fondly shortened to a more familiar ‘Agee’. AGI also evokes both ‘AI’ and ‘agile’, and has a hint of magic and magi about it.
You remember those “whiteboard animation” videos, in which a hand super-quickly draws a sketch and lays down words while a voiceover plays? They were the ‘hot new thing’ circa 2014, and now generally go by the name of “explainer videos”.
Production of them is now a Cloud or tightly Cloud-locked subscription affair, and there appears to be no desktop-only software worth having for making them quickly and easily. The leading $40+-a-month names are Sparkol VideoScribe Pro and Easy Sketch Pro, among others. There’s obviously a lot of money in such services, and the Web is intensively astro-turfed with page after page of spam and misleading marketing on the topic, making it almost impossible to find reliable information. Anyway, they exist, and the market leader VideoScribe has impressive capabilities, yet is fairly simple to use.
Their lustre as a media form has since faded. ‘Explainer videos’ were very hot in 2015-16 as we came out of the Great Recession, but they became over-used for mundane purposes, often purchased off the shelf for $20 from quickie providers in the back-streets of India, via Fiverr. Such indifferent use has turned them into the humdrum PowerPoint slides of 2020: a superficially fancy presentation of fuzzy or half-baked ideas, done in a manner that’s then difficult to question or challenge. The format is thus one that people now wince at, when they see it heave into view in a business meeting, teachers’ meeting or marketing context.
But that doesn’t have to be the case, and with a good story to tell and some creative flair they still have a place in education, especially for children and in lower-level work training. That set me wondering about how one would get a Poser line-art render to animate as if it was being drawn line-by-line by a human hand. Being able to output such a thing might be an attractive feature for Poser 12.
At first I wondered if Smith Micro’s MotionArtist could do this sort of “reveal a drawing” effect. Nope, seems not. Reallusion’s Cartoon Animator? Nope. You’d think such a sales-worthy feature would be a natural one to add, but it’s not there.
What about the vector tools? Surely there’s an Inkscape plugin? Nope. Clip Studio? Nope, seems not. Paint.NET? Nope.
Then I thought about Poser’s ability to output a Corel Painter script, for playback in Painter. But my in-depth look at that nearly forgotten Poser feature shows… that it does not actually lay down “strokes and lines”, whatever the end result may appear to be. Like I said above, a lot of this stuff is in the realm of “smoke and mirrors”, in terms of how it’s actually done vs. what it actually looks like.
But could a Python script in Poser go through a figure and selectively turn off the Geometric Edge line, stepping through the body parts according to a set “feet to head” list, and saving each step out to a movie frame as it went along? That would give a certain effect, but it might look a little weird on playback, in terms of not looking “hand drawn”. For instance, the long leg lines would be drawn in all at once. The script would also have to generate shaped white geometry at certain co-ordinates (placed in front of the leg) to prevent the complete line from being seen all at once. It’s a very complicated possibility, for someone with a few weeks to spare and ninja Python coding skillz, but it doesn’t seem likely to happen.
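To make the idea concrete, here is a sketch of the frame-stepping logic only, in plain Python. The real thing would have to run inside Poser’s Python console and call the PoserPython API for scene/actor access and render-to-frame; those calls aren’t shown, and the exact method names would need checking against the PoserPython manual. The body-part names here are illustrative, not verified Poser actor names.

```python
# A sketch of the reveal scheduling only -- the PoserPython calls that would
# toggle each part's Geometric Edge line and render the frame are omitted.

DRAW_ORDER = [  # an assumed "feet to head" list, as described above
    "lFoot", "rFoot", "lShin", "rShin", "lThigh", "rThigh",
    "hip", "abdomen", "chest", "lHand", "rHand", "neck", "head",
]

def edge_states_per_frame(draw_order):
    """For each movie frame, return the list of body parts whose
    edge lines should be visible at that point in the reveal."""
    frames = []
    revealed = []
    for part in draw_order:
        revealed.append(part)
        frames.append(list(revealed))  # one frame per newly revealed part
    return frames

frames = edge_states_per_frame(DRAW_ORDER)
print(len(frames))     # -> 13, one frame per body part
print(frames[0])       # -> ['lFoot'], first frame shows only the first part
print(frames[-1][-1])  # -> 'head', the reveal ends at the head
```

Even this trivial scheduler shows the “all at once” problem mentioned above: each long line still pops in whole, rather than being traced.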
Perhaps then what’s needed is for Poser 12 to whip up some maths that saves Comic Book Preview edge-inking lineart to SVG, and have that SVG embed pseudo “pen stroke” information based on mesh names. For instance, tell the SVG: this set of ink lines comes from the head mesh | therefore when drawing, reveal the main head outline first | then eyes, nose, mouth | then neck | then reveal hair and hat lines. Or: this leg line is a long line from the leg mesh | therefore draw it bit by bit | first down one side of the leg and then the other. It would probably still not look convincing.
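A minimal sketch of that idea: write SVG paths tagged with a hypothetical `data-draw-order` attribute derived from the source mesh name, which a player script could later use to reveal lines in a human-like order. The attribute names, mesh names and ordering table are all my own invention, not anything Poser emits.

```python
# Emit line-art paths as SVG, ordered and tagged by an assumed per-mesh
# drawing priority. A reveal player would draw them in data-draw-order.
import xml.etree.ElementTree as ET

# assumed reveal priority per mesh name (lower = drawn first)
MESH_ORDER = {"head": 0, "eyes": 1, "nose": 2, "mouth": 3, "neck": 4, "hair": 5}

def lineart_to_svg(paths):
    """paths: list of (mesh_name, svg_path_data) from the edge-inking step."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg")
    # emit paths sorted by the assumed drawing order
    for mesh, d in sorted(paths, key=lambda p: MESH_ORDER.get(p[0], 99)):
        ET.SubElement(svg, "path", d=d, attrib={
            "data-mesh": mesh,
            "data-draw-order": str(MESH_ORDER.get(mesh, 99)),
        })
    return ET.tostring(svg, encoding="unicode")

doc = lineart_to_svg([("mouth", "M10 40 L30 40"), ("head", "M0 0 L50 0")])
print(doc)  # the head path is written before the mouth path
```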
The other way might be an AI that “knows” about the order of head, hands, feet, eyes, etc. It would look at any vectorised Poser lineart, identify the body parts, and then “know” the order in which a human would draw them, and how. It would then save a new SVG with that drawing information embedded in it. Clip Studio already has AI pose recognition, which transfers the pose from a photo to a 3D figure, so this is not going to be impossible in future. It’s probably the future of this sort of thing, but it’s still some way off.
Alternatively, if you like the Poser Comic Book inking/rendering style and want to keep it, the best option may simply be to import your lineart into VideoScribe. It will automatically vectorise the image, and you then choose the reveal style…
The bottom-left one is the best. It might appear that VideoScribe is preparing to draw ugly blodgy lines over your lovely lineart, but that is not the case. What it’s showing is where your lines will be revealed, not drawn.
What appeared to work best for this import was a 600px PNG with a plain background. On playback the hand and pen dart about all over the place, since there is no “order of laid-down lines” to follow, but if you set the draw time to under 5 seconds then the flickeriness is not going to be too wearing on the audience (though a few may be on the floor having a flicker-induced epileptic fit). It’s also possible to remove the ‘drawing hand’, or to use just a pen-nib instead. For reveal times, the best VideoScribe scene/video settings are said to be…
Another possibility is that you vectorise in Inkscape, add a new top layer, then quickly paint over the drawing with a brush, in an approximation of hand sketching. You make these drawn lines fat enough to cover the drawing beneath, and set the top layer to an opacity of zero. When brought into VideoScribe, you can apparently tell the reveal “hand” to follow only the lines on the top layer, thus cunningly revealing the already-done layer beneath. That would give you a more realistic “drawing by hand” effect on playback. Like I said, it’s all “smoke and mirrors” in this corner of graphics-world.
The last P3DO Pro update was in December, with many new features added, and this new 2.8 is from March/April 2020. Again there are lots of new items and other tweaks, detailed in the linked changelog. Much as I like PzDB as a Poser library manager, this new P3DO 2.8 has to be worth a mere $12.50 just to have as a backup. I believe it also works offline, whereas PzDB needs to ‘phone home’ occasionally, which may make P3DO useful for offline Poser creatives.
Curiously, the DuckDuckGo search-engine seems to heavily censor searches for the software (six results for “P3DO Explorer”, none relevant). Whereas Google Search is happy to provide 14,000 results. I assume that the word “p3do” is now on the Bing bad-words blacklist (DuckDuckGo is mostly Bing, with a bit of Yandex). Yes, it’s 2020 and advanced search-engines are still using clunky old keyword-based censorship. This is yet another reason why DuckDuckGo isn’t something you should use for serious searching, only for quick navigational searches and image searching (on which it’s actually quite good, mostly because it has less spam and fluff than Google Images).
Also, in other discount news, the main Reallusion Store has a coupon code for 50% off everything, also until 30th April 2020. Theoretically that brings the standalone conversion utility 3DXChange 7 Pro down to $99, which will be of interest even if you don’t want to get into the iClone ecosystem.
If you haven’t been following Reallusion closely, you can catch up with a handy new 3-minute Reallusion 2019 video roundup. It briskly showcases all the new ‘big’ content made available for sale in 2019, in both 3D and 2D. Note that there will also have been content from smaller makers, and last time I looked they had a separate store for such items.
The Bendies were first made for CrazyTalk Animator 3, then upgraded for Cartoon Animator 4 and its ‘360 head’ feature and new face puppeting. They also lack edge inking, which means webcomics artists could over-ink a bit to add their own look. They look excellent, and can be purchased individually at around $10-$12 each.
Worried about your fingers getting creaky? By the time they do, software will likely be stuffed with AI-enhanced voice assistants and the old mouse and keyboard may be gathering dust in a drawer. This is suggested by a new article from Maxon, maker of Cinema 4D, which profiles Kye Young and her use of the software. She Makes Comics in C4D Using Voice Controls…
“after developing degenerative arthritis in her fingers … she figured out how to use voice commands to control C4D, dramatically reducing the need for her mouse and keyboard.”
Take the vocaloid 3D characters from MikuMikuDance to Blender, via what appears to be a free set of MMD Tools. Now updated for Blender 2.8x.
As you might expect it’s a little complex, but there’s a detailed tutorial video on the install and conversion for Blender 2.8. Apparently the dance motions are also converted.
I’m not sure quite why you’d want them in Blender rather than their native MMD software, but the tools are being mentioned here because one could probably learn something about building and texturing toon figures by examining one of these inside Blender. Perhaps it’s also possible to give them a makeover and then convert them back the other way, and thus create new non-standard MMD characters?
Hopefully it’s not all just to remove the clothes and see them dance in the nude as A.I.-driven webcam girlz.
Google has released the free BodyPix 2.0. This offers automatic identification of people against a relatively noisy background, and then spots and tracks each person’s twenty-four body parts. It then segments, ID’s and colours each body-part. It can do this even while being fed around 20-25 frames per second, on fairly standard hardware such as an iPhone.
Version 2.0 adds “multi-person support and improved accuracy”.
They also offer the sister-software PoseNet, enabling a basic emulation of what a Kinect does but via standard Webcams…
both BodyPix and PoseNet can be used without installation and just a few lines of code. You don’t need any specialized lenses to use these models — they work with any basic webcam or mobile camera. And finally users can access these applications by just opening a url. Since all computing is done on device, the data stays private. For all these reasons, we think BodyPix is easily accessible as a tool for artists, creative coders, and those new to programming.
So… how to plug this stuff into a nice little DAZ/Poser-friendly Webcam utility? One that, at the flick of a drop-down menu, will happily real-time puppet and animate any stock figure from an Aiko 3 up to a G8 or La Femme?
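The real BodyPix API is TensorFlow.js, running in the browser, so it can’t be shown directly here. But the shape of its part-segmentation output, and how a utility might colour it, can be sketched in plain Python: a per-pixel grid of part IDs (-1 for background, 0-23 for the body parts), mapped to colours. The colour scheme below is my own placeholder, not Google’s.

```python
# Sketch of colouring a BodyPix-style part segmentation map.
# Each pixel carries a part ID: -1 = background, 0-23 = a body part.

PART_COLOURS = {pid: (pid * 10 % 256, 255 - pid * 10 % 256, 128)
                for pid in range(24)}  # placeholder palette, one per part
BACKGROUND = (0, 0, 0)

def colour_segmentation(part_id_grid):
    """Map a 2D grid of part IDs to RGB tuples, one per pixel."""
    return [[PART_COLOURS.get(pid, BACKGROUND) for pid in row]
            for row in part_id_grid]

# a tiny 2x3 "frame": background in column 0, part 0 and part 10 elsewhere
grid = [[-1, 0, 0],
        [-1, 10, 10]]
image = colour_segmentation(grid)
print(image[0][0])  # background pixel -> (0, 0, 0)
print(image[0][1])  # part 0 pixel -> (0, 255, 128)
```

A DAZ/Poser-friendly wrapper would do this per webcam frame, then map each coloured region’s centroid motion onto the matching figure bone.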
Oh dear, Adobe wants to do talking cats. Their “Project Sweet Talk” aims to go head-to-head with the existing and very mature CrazyTalk from Reallusion. In a demo this week the only difference appears to be auto-identification of where the mouth and eyebrows are…
“The [Adobe] system works with any unprepared image, identifying and deforming facial features to generate mouth shapes and, to a lesser extent, eye and brow movements matching the audio file.”
But if CrazyTalk doesn’t already precisely auto-select the lips and eyes, then it’s a feature that can’t be too far off. How difficult is it, anyway, to just manually add a half-dozen control points to help guide the software? It’s part of the fun, especially for kiddie-oriented software.
Google has just open sourced its PoseNet 2.0 pose-detection magic. Which suggests we might get a simple affordable “video to Poser/DAZ pose preset” software, in due course. Without having to stick day-glo markers over clothing and faces. I don’t know of any such thing currently, that’s markerless and sub $50 and works without an enormously expensive iPhone or similar kit.
Apparently Disney also has markerless motion-capture for faces that focuses on the jaw. It detects skin deformations around the jaw, as a proxy of jaw bone position.
Amid the wave of new releases in recent weeks, good advice from Ricky (content editor at Renderosity) on “Focusing on the Tools at Hand”…
“I’m seeing a lot of animators jump from tool to tool as new ones are released. It’s no longer uncommon for there to be several tools that basically do the same thing installed on one’s system. … [creatives suffer from] informational overload when it comes to tools. Our inboxes are stuffed with announcements and there are getting to be so many vendors at the various conferences and expos that its difficult to see all them.”
Very true. I see a lot of CG news, though not being in the USA I don’t get to the trade expos to pound the floors and see the launches. But the wash of CG news gets filtered before it reaches this blog. The bits you read here are only those that make it through the filter of being somehow relevant to Poser / DAZ / digital landscapists, or to digital comics creation, or to fantasy/sci-fi artists. Thus when Lightwave gets sold, that’s of mild interest here because it interfaces nicely with Poser. Plus, the CG news often gets questioned, tested or investigated before it’s posted — it’s not just ‘link the press release’.
Luckily I’m also somewhat constrained, in that I now… i) avoid nearly all ‘animation’ (fun to watch, not fun to make) and am instead trying to build up a set of skills in comics making (less work, quicker rendering, a bit more fun); and ii) can’t afford the uber-PC and ninja £600 graphics-card needed to run everything new and shiny. Several recent bits of demo or review software (Substance and AI Gigapixel) have even refused to install because my PC was deemed under-par. Not even an install!
Ricky suggests in his article that we should stop and think if software we already have duplicates the features of the shiny new software. Again, good advice, but sometimes you also want to support software that’s more open. For instance I was enamoured of Sketchbook Pro for about 18 months, in my slow move toward finding time for 2D painting/overpainting. But now I want to support the similar but open-source Krita 4.x. That Sketchbook Pro has now slipped back from being supposedly ‘free’ to having a paid version, and within a year of Autodesk’s ‘free’ announcement, seems to partly vindicate my choice.
Anyway, for what it’s worth, prompted by Ricky here’s my list of currently frequently-used software on Windows:
Poser 11.x Pro. REVIEW COPY (no expiry).
DAZ Studio 4.x (with Scene Optimizer for iRay). FREE.
Carrara 8.5. PAID.
PzDB (for Poser/DAZ content management and selection). GIFT FROM THE MAKER (for helping with v.1.3).
3DXChange (3D file converter). WON IN A CONTEST, with iClone 7.
Meshlab 2016 (3D file wrangler). FREE.
Vue xStream 2016 R4. REVIEW COPY (no expiry).
Flowscape 1.2. PAID.
PTGui 8 (panorama stitcher). PAID.
Krita 4.x (2D digital paint). FREE.
Dynamic Auto-Painter 6.x (aka DAP 6). PAID.
Photoshop, mostly CS6 – plus a half dozen plugins, mostly tried and tested old-school ones. PAID/FREE.
IrfanView for quick image previewing and basic processing in Windows. FREE.
FastStone Capture for screenshots. FREE.
+ Ugee 1910b ‘draw on the screen’ pen monitor + main 24″ monitor. PAID, but both cheap Amazon ‘warehouse deals’.
You’ll notice iClone is missing. Many long-standing readers will know I used to be a big fan and user of it, but then they changed the UI wholesale, and more. I did win a copy of iClone 7 recently in a contest, and installed it, and was glad to gain the latest 3DXChange utility, but… I just don’t tend to use iClone itself any more. These days I’m far more fond of their fine CrazyTalk Animator and its potential for rapid comics production.
Back to Ricky’s article. Perhaps we need a big ‘decision tree’ flow-chart, to help in choosing the right software for the task?
Autodesk is now shipping the big daddy of 3D software, in a new 2020 version. The features I noted, among the list…
* new non-photorealistic shaders for NPR … including “Toon Width” and a rather naff-looking “Halftone”.
* greenscreen keying tools.
* point-cloud export now in .ply and .e57 file formats.
* faster viewport for complex rigs and animation preview.
* faster UV editing “when working with many UV islands”.
* “standard codecs when rendering AVI files”.
“Toon Width” has an un-parsable specification, but my attempt at translating it into plain English is this: it looks like it’ll automatically make toon lines thinner the further away an object is from the camera. That sort of thing would actually be a nice addition to Poser 12’s comic book capabilities, I’d suggest.
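My reading of “Toon Width” can be sketched as a formula: line width scales inversely with distance from the camera, clamped to a sensible range. The scale factor and clamp values below are illustrative guesses, not Maya’s actual numbers.

```python
# Toon line width falling off with camera distance, clamped to a range.

def toon_line_width(distance, base_width=4.0, reference_distance=10.0,
                    min_width=0.5, max_width=8.0):
    """Thinner toon lines the further an object is from the camera."""
    if distance <= 0:
        return max_width  # at or behind the camera plane: fattest line
    width = base_width * (reference_distance / distance)
    return max(min_width, min(max_width, width))

print(toon_line_width(10.0))    # at the reference distance -> 4.0
print(toon_line_width(40.0))    # four times further away -> 1.0
print(toon_line_width(1000.0))  # very distant -> clamped to 0.5
```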
Unreal Engine | Epic MegaGrants. $100-million, apparently. Applicants don’t have to have a game… “Anything in UE4 or relating to UE4 is eligible.” Or even something made in Unreal… “If your project is built in another engine or toolset and you want to move it to UE4, you are eligible to apply for an Epic MegaGrant”. It’s not restricted to the U.S…. “If we can legally make payment to you, you are eligible.”
Some recent semi-automatica from Japan. For animation, of a sort, but also with obvious use for comics makers who only need slightly different variants between comic frames.
1. Live2D Euclid 1.0. Illustrated 2D characters in 3D space, seemingly auto animated (once you have the character set up)…
Their less turn-tastical but more polished version of this is their Live2D Cubism 3.0 software. 3.0 appeared in 2017, and it’s now at 3.3. As with Euclid you feed it a multi-layer 2D .PSD file from Photoshop, but with Cubism you can only set up relatively subtle camera-facing animations. Looks interesting, and there are templates to base your new characters on…
Sadly the software is a monthly subscription, but reasonable at around $10 per month. There’s a free trial for Windows and Mac, with translated UI, and an English manual. It’s interesting to know that this software is out there. But without looking at it too deeply I’d suspect that the latest CrazyTalk Animator (soon to be Cartoon Animator 4.0) would be feature-comparable and possibly easier to use. Though possibly more expensive if Cubism has a thriving hinterland of low-cost third-party animation and template packs over in Japan.
2. PaintsTransfer. AI-assisted auto-painting of line art. The user first places and adjusts ‘wheels’ over the line art, then indicates general colours at the centres of these. A first approximation of the colouring is tested, and then if the auto-colour is broadly acceptable the user refines it by placing further colour dots onto the wheels. The code has been released, but it’s not for Windows.
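PaintsTransfer’s released code isn’t for Windows, but the core “colour dot” interaction can be sketched in plain Python: flood-fill each enclosed line-art region from a user-placed hint point. The real tool uses a neural net that tolerates gaps in the lines; this is only the naive, non-AI version of the same idea.

```python
# Naive hint-based colouring: flood-fill regions of line art from user dots.
from collections import deque

def flood_colour(grid, hints):
    """grid: 2D list, 1 = ink line, 0 = empty paper.
    hints: list of ((row, col), colour) dots placed by the user.
    Returns a 2D list of colours (None on ink lines or unfilled areas)."""
    h, w = len(grid), len(grid[0])
    out = [[None] * w for _ in range(h)]
    for (r, c), colour in hints:
        if grid[r][c] == 1 or out[r][c] is not None:
            continue  # hint dropped on a line, or region already coloured
        queue = deque([(r, c)])
        out[r][c] = colour
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w \
                        and grid[ny][nx] == 0 and out[ny][nx] is None:
                    out[ny][nx] = colour
                    queue.append((ny, nx))
    return out

# two regions separated by an ink line; one hint colours only its own side
art = [[0, 1, 0],
       [0, 1, 0]]
filled = flood_colour(art, [((0, 0), "pink")])
print(filled[1][0])  # same region as the hint -> "pink"
print(filled[0][2])  # across the ink line -> None
```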
Again it’s interesting, but Krita 4.0 seems to be the most vigorously-developed choice for auto-colouring of line-art at present. Note that the free Krita also has the ability to auto-colour by greyscale value (e.g. lighter tones become skin-pink)…
3. Anime generation with AI, in a recent conference presentation. Give it three keyframes, have the AI intelligently interpolate the animations in between, to generate 16 flowing frames.
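The AI in the paper does far more than this, but the frame budget itself, three keyframes stretched into 16 output frames, is simple arithmetic. Here is the naive linear-interpolation baseline, with each keyframe reduced to a single numeric pose value for illustration:

```python
# Naive baseline: linearly interpolate between keyframes to fill a frame budget.

def interpolate_frames(keyframes, total_frames):
    """Spread len(keyframes) key values linearly across total_frames
    output frames, endpoints included."""
    out = []
    segments = len(keyframes) - 1
    for i in range(total_frames):
        t = i / (total_frames - 1) * segments  # position along the key list
        seg = min(int(t), segments - 1)        # which keyframe pair we're in
        local = t - seg                        # 0..1 within that pair
        a, b = keyframes[seg], keyframes[seg + 1]
        out.append(a + (b - a) * local)
    return out

frames = interpolate_frames([0.0, 10.0, 4.0], 16)
print(len(frames))   # -> 16 flowing frames from 3 keys
print(frames[0])     # -> 0.0, starts at the first keyframe
print(frames[-1])    # -> 4.0, ends at the last keyframe
```

The AI’s contribution is precisely that it does not interpolate linearly like this, but instead guesses plausible in-between drawings.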
A glimpse at the future of semi-automated AI-assisted workflows! Next stop, 3D strand ‘autohair’ from a photo…
Reallusion now has a release date for their excellent 2D animation software CrazyTalk Animator. Thankfully there’s no mention of a total UI makeover, and the new features do look rather nice. I’m mostly interested in this software not for animation, but for its potential in rapid 2D comics production.
The re-named Cartoon Animator 4 will be released in April 2019 with…
* Smart IK for character joints.
* a “360 Head Creator”, aka “3D Head Creator”.
* Live performance capture (webcam head/facial mocap for 2D toons).
* Real-time lip-sync, for voice actors -> characters.
Of these, the 3D-ish “360 Head Creator” seems most interesting for static comics production, in terms of differentiating it from what you could do natively in Photoshop…
“Quickly Transform a 2D Face into a 3D Head. 3D Head Creator transforms 2D art into 3D styled characters with up to 360 degree of motion for deeply rich performances. Photoshop round-trip integration for editing multi-angle character in and out of 3D Head Creator.”
So, there’s no 3D object mesh generation from flat 2D ‘line-art and colours’ assets, which would be a theoretical marvel of technology that probably needs AI assistance. If it could be done at all.
Instead it looks more like a pseudo 2.5D, or a bit like what the isometric RPG game-makers do when they use a script to quickly output a series of 2D turn-table renders for subsequent game animation of sprites.
Along with the new feature will come…
“360 animation controls and timeline editing” and “New generation of 360 creative assets”.
It’ll be interesting to see video of it in operation, and discover how laborious the set-up is and how much is automated with a few clicks. And how easily Poser’s Comic Book Preview mode might produce the 2.5D character views needed.
For now, take a look at the new page which explains it all and has screenshots and details.