Coming soon and booking now, Mastering Animation for Poser 12 webinars with Charles Taylor. Also relevant for Poser 11, of course, it’s just that 11 users won’t have such swift SuperFly rendering or some of the new Cycles nodes. But that won’t matter much if you’re using Firefly, Sketch, Comic Book, or Preview PNG sequence + Photoshop filters/actions.
NVIDIA Omniverse has been released in open beta. In its current form it appears to be an extensible virtual production studio, giving teams the ability to… “simultaneously work together on projects with real-time photorealistic rendering” but also to “work concurrently between different software applications” via Omniverse Connectors which bridge into “leading” content creation software. Most interestingly, there is a promised Connector bridge to the free Blender in the near future. Naturally, your studio’s creatives all need to be brewing their wizardry on fast n’ shiny NVIDIA graphics cards and Windows.
The Omniverse platform is only in open beta at present, but it already has several working modules, including ‘Omniverse View’ for architects and ‘Omniverse Create’ for designers and creators. It seems to use the Pixar USD format for universal ‘in-out porting’ of 3D scenes, moving them around the various applications.
“Early next year” this virtual studio platform will see the release of…
“‘Omniverse Audio2Face’, AI-powered facial animation; and ‘Omniverse Machinima’ for GeForce RTX gamers”.
Machinima being the term for real-time WYSIWYG animation using a game-engine, and from the sound of it ‘Omniverse Machinima’ seems to be tilted toward Unreal Engine users and TV studios — rather than the hobbyist crowd that is currently using iClone.
The ‘Audio2Face’ module is more interesting and will aim to have an AI… “generate expressive facial animation from just an audio source” without any need for expensive and fiddly camera-based mo-cap. That makes a lot of sense. Train an AI to match millions of audio vocalisations with visual expressions, then have it generate expressions purely from audio. In fact I’m a bit surprised such a thing doesn’t already exist in software — beyond the existing ‘vocal audio to mouth phonemes’ lip-sync automation. But perhaps animating a full face and escaping from ‘the uncanny valley’ in real-time may need a Cloud connection and a zillion back-end NVIDIA GPUs to work? My guess is that you would need a second AI to weed out the “ugh, no… uncanny valley” results.
Anyway NVIDIA Omniverse looks good and may even be free(?), albeit after the entry-ticket price of a 30-series NVIDIA graphics card and (ugh) Windows 10. When it’s all polished up and hooked to a Blender bridge, that could make it very interesting for small indie animation studios. But what are the prospects for non-techie hobbyists? Well, DAZ is also an NVIDIA partner, so I guess if DAZ Studio implements a Pixar USD-format bridge then they could also enter the Omniverse?
A new Reallusion video “Hand Animation Solution from the Puppet Actor Toolkit pack for Cartoon Animator”. Control animations with your hand. Requires extra kit (rather than working from an HD webcam) and is obviously a bit difficult to control in real-time. Nor can you emulate a traditional “glove-puppet” experience, it seems. But it’s amazing it can be done at all, and as always Reallusion has pre-rigged templates for Cartoon Animator.
There’s a bit more specific news on Poser 12, and an official announcement of a release-window in the first week of November. So they’ve only slipped by a week, from the expected 31st October, which is pretty good for a big software package and a small team.
“The Windows version of [Poser 12] will be available to Early Access customers in the first week of November. The Mac version of Poser 12 will be released in early December.”
“Early Access” presumably being (this is just my guess) some sort of download sign-up gate for existing users, that may warn with something like… “I am an experienced Poser user and understand that this is a .0 release barely out of beta, and some features may be incomplete or missing, and old scripts may not work, and I promise not to moan and rant in the forums…” etc.
My further guess would be that an ‘all-comers’ public Windows version would be out and ready a week before Black Friday, which in 2020 is 27th November. It looks to me like the sensible in-depth reviewer should hold off until January or even February 2021, and ideally wait on the first patch.
Now that there’s a fairly firm release-window, even if a bit staggered, I’d imagine we should have the complete feature-list announced by next week.
On the public marketing, I do hope they have decent demo pictures/video for the Comic Book Mode this time. Smith Micro used to choose the most awful possible figure choices, such as the rather naff Barney with black hair and a grungy plaid shirt, or the horribly grungy Creature from the Black Lagoon. Show the public the free-with-Poser SAK Robo-kitty figure being tooned in relatively flat lighting, would be my suggestion, or the old Poser Star! figure might be an attractive toon choice, or some of the Nursoda figures. The free-with-Poser Maisie can also toon up surprisingly well and pleasingly, with a bit of work, once you abandon the photoreal in which she looks so ugly. She’s obviously not meant for photoreal.
What about the Python scripting language, which is vital to many scripts? The new post focuses on this and we get a version number: Python 3.7.9 is what will be in Poser 12, which will update from the old PoserPython 2.7 version. We also learn that the important EZSkin is being fixed for Poser 12 by the author, and that…
“The dev team is working on a way to easily download, install and protect Python scripts that are purchased from the Renderosity Marketplace. We are also looking for vendor partners to create new scripts for Poser 12.”
Interesting. You can already create a .PRC encrypted Python script, or at least you could in the previous version of Python and Poser. That’s not ideal, though, as it makes fixing old un-maintained scripts impossible.
So it sounds to me like Poser 12.1.x may have a built-in secure script ‘encrypter/decrypter’ module. I can see how that could encourage commercial $20 script-makers and thus boost the Renderosity Marketplace. It would also enable easy ‘unlocking’ by the owner of Poser, if the family of a deceased coder wished to give away the scripts for free, or sell them on to another developer.
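As for the version jump itself, the breakages most likely to bite old PoserPython 2.7-era scripts are the standard Python 2 to 3 changes. A few illustrations in plain Python (nothing Poser-specific is assumed here):

```python
# A few Python 2 -> 3 changes most likely to break old Python 2.7 scripts.

# 1. print is now a function, not a statement:
print("hello")          # Py2's `print "hello"` is a SyntaxError in Python 3

# 2. the / operator on two ints now returns a float:
half = 1 / 2            # 0 in Py2, 0.5 in Py3
whole = 1 // 2          # floor division: 0 in both versions

# 3. dict.has_key() is gone; use the `in` operator instead:
d = {"a": 1}
ok = "a" in d           # Py2's d.has_key("a") raises AttributeError in Py3
```

Scripts that relied on integer division for loop counts or frame indices are the sneaky case, since they keep running but produce float values where ints are expected.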
Also, it’s now known there will be a “free trial up to 21 days”, as before with Poser 11. And… “We do not plan to invalidate Poser 11 licenses that are used to upgrade to Poser 12.” It sounds like you should be able to run both on the same PC, meaning you can try a trial version of 12 alongside an existing install of 11. That said, you should probably back up vital presets before installing 12.
Reallusion’s Cartoon Animator 4.3 Pipeline recently enabled sending of scenes via a script to Adobe After Effects (CC 2020, CC 2019, CC 2018, CS5, CS6). Perhaps as a follow-on, the latest After Effects has introduced a new “3D Design Space in After Effects”…
This looks very like the intuitive angled-layers view in Cartoon Animator, which should make the Animator 4.3 -> AE transition easier. Amazingly, it seems that After Effects couldn’t do this before now.
Cartoon Animator 4.3 Pipeline can now export your Project to Adobe After Effects. There’s a handy video on installing the script and sending the scene over. Nice. Now you can say to the animation snobs, “Oh yeah, it was made in After Effects!”
SIGGRAPH 2020 now has a handy Open Access page for public content. Spotted in the 2020 sections: using physics to make 3D fonts dance; deeply emotional talking heads; AI for lineart colorisation; stylization in realtime; and anime-style colorization. They’ve also usefully added collected links to public/open material from conferences back to 2015.
The DAZ Studio fur-and-hair plugin Look At My Hair 1.6 is currently on a 50% off deal at the DAZ Store. Its companion LAMH 2 iRay Catalyzer plugin is also 50% off at around $7. Note that the latter iRay addon only works with a few “LAMH models [creatures] compatible with the Catalyzer”, so if you want LAMH for some use other than DAZ LAMH-enabled creatures then the extra iRay addon may not be needed.
Just be warned that it’s well known that LAMH is extremely crash-prone, and it needs to be learned fully and worked in the correct way if you’re to try to avoid some of the crash points. I can confirm that it’s crash-prone, and that it will usually take down DAZ with it. Check the forums for advice and some possible workarounds for such common problems. One of the worst problems is when it crashes DAZ on trying to load a finished scene that was saved with LAMH hair in it — users might save two versions of such a file just in case, one with the preset applied and one with it removed from the scene.
That said, when LAMH can be made to load/render, a preset is simple to operate in terms of fur colour and density and it renders reasonably quickly in 3Delight. Very quickly in default lighting, less quickly in complex lighting. For instance here I have dear old DAZ Millennium Cat with a black cat texture MAT (Classic Cats pack?), a fairly tough ‘Caressed by Light’ light preset #05, and the LAMH short preset for the MilCat ramped up to 260,000 hairs. Even with the tough lighting and many hairs, this render is done in five or six minutes for me. And that’s with me not using the Xeon workstation, just the normal desktop PC.
3Delight. Raw render, no postwork.
Since version 1.5 LAMH has also offered easy export for Poser and Vue and even .OBJ…
“New in 1.5: LAMH will now create optimized FiberHair for rendering in Poser and Vue … applies compression to the fibers to control the size of the exported hairs. … LAMH will include the UV’s and custom textures with the FiberHairs, providing a complete asset” for export. Also… “provides the option to write the FiberHairs to .OBJ format.”
… and export is relatively straightforward (provided you can actually load the LAMH preset and pose, without DAZ crashing-to-desktop). Such exports alone may thus be worth the $25, when LAMH is on a 50% discount. Indeed, it may even enable iRay (I’ve yet to test that myself) as Kendall in the forums gives some possibly good advice re: taking LAMH FibreHair exports to iRay…
“Do not generate the FiberHair until right before render. [Because FibreHair doesn’t auto-follow the pose]. The FULL version’s FiberHair export will put the generated hairs DIRECTLY on the model in the DAZ Studio viewport.” I also read a little later that “It is no longer necessary to export out FiberHair specifically for iRay … LAMH takes care of determining the rendering engine and adjusting accordingly.”
Nice, if that’s the case. And apparently FiberHair exports can also inherit the colouring of the base diffuse texture (an .OBJ export of the strands doesn’t).
And I guess that if one then intends to do a lot of iRay rendering with FiberHair, a recent DAZ Studio 4.12.x build is the current minimum — as that was updated to include “NVIDIA iRay RTX 2020.0.0 (327300.2022)”. That’s important because the iRay devs reported in the early Spring that their new iRay… “2020.0 final has just been released” and strand fibers do “especially well” with the new iRay 2020 + an RTX graphics card. Even fibres with dense intersections do very well, they said. Thus I presume that DAZ running iRay 2020 should help with the speeds on LAMH strand hair, if you have the required graphics card type. Possibly even if you only do CPU rendering, though that’s another guess. According to the forums this ‘Speed’ setting, in particular, might also help…
The current public beta of DAZ Studio is the very latest 4.12.1.x build, which is what I’m now running on. As well as the addition of iRay 2020, in recent 4.12.1.x releases the technical Changelog notes several dForce version updates and .OBJ import/export improvements.
For people looking at fur options I should note that Poser 11 Pro has a Hair Room built in, in which basic fur is relatively easy to make and quick to render. I seem to recall that Hair Room fur colouration can automatically take up from the diffuse material’s pattern, if needed. But I hear that LAMH can also do that, though not for .OBJ exports.
There’s also the new DAZ Studio Strand-Based Hair Editor, which is similar to but more stable than LAMH. But the problem there is… it has no cat presets! In the meanwhile, LAMH has excellent cat hair, both the old MilCat and the new Hivewire HouseCat. And actually, thinking about it… I have yet to see a single animal preset produced for the native DAZ strand hair. You’d have thought that, a year after release, we’d have fifty or more animals furred by now. Is the absence because such hair can only be saved if there’s a base mesh to ‘grow on’? And that mesh can’t be redistributed, as it’s part of the commercial model?
EBSynth has moved into beta. 10x faster, and batch auto re-naming of files when you drag-and-drop. Still free.
It’s style transfer for video frames. You first extract a still keyframe from a video, and give it a nice manual artistic paintover. Then you use the resulting painting as a style-source in EBSynth for processing all the other frames. Once done, the whole video clip should have taken on the same painterly style.
Obviously you have to work ‘per sequence’ of the video. For instance, you can’t just take a frame out of an exploding tropical volcano scene, overpaint it, and then also expect the same painted frame to work on the next scene… which may show James Bond in a speedboat racing over the sea.
Thus the way EBSynth works is a bit different than just running an entire video through an automated paint-emulation filter. One of the advantages may be, judging from the test footage, that the resulting ‘art emulated’ video is less flickery, depending on how wild your paintover was.
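The per-sequence rule can be pictured as a simple lookup: every frame of a shot inherits the painted keyframe of its own shot, never a neighbouring one. A toy sketch of that mapping, using the volcano/speedboat example above (the shot boundaries and filenames are made up):

```python
# Toy illustration of EBSynth's per-shot constraint: each frame of a video
# is matched to the painted keyframe of the shot it belongs to.
# Frame numbers and filenames here are invented for the example.

shot_starts = [0, 120, 300]                  # first frame of each shot
keyframes = {0: "volcano_paint.png",         # one manual paintover per shot
             120: "speedboat_paint.png",
             300: "casino_paint.png"}

def keyframe_for(frame):
    """Return the styled keyframe governing a given frame number."""
    start = max(s for s in shot_starts if s <= frame)
    return keyframes[start]

print(keyframe_for(50))    # a frame in the first shot uses the volcano paintover
print(keyframe_for(150))   # a frame in the second shot uses the speedboat one
```

In practice a long or fast-changing shot may need more than one keyframe, but the principle is the same: styled stills only propagate within their own sequence.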
With a bit of careful work it seems it can also be used to remove or add wrinkles, and thus change age. It’s still at the “interesting tech-demos” and “light-show hippies getting freaky with footage of Terence McKenna” stage, but it’s one of several free and relatively easy style-transfer options worth keeping an eye on. Though it looks like we’re still a long way from “grab a Jack Kirby comics frame, apply the style to my basic lineart”.
There’s an official tutorial here (starts at 4:54, once the introductory guff is out of the way).
So, what are we going to call all this semi-automated, generative, and AI-assisted graphics production? My vote would be for Assisted Graphics Intelligence, or AGI for short. To be pronounced Ag-eee, as when the name ‘Agnes’ is fondly shortened to a more familiar ‘Agee’. AGI also evokes both ‘AI’ and ‘agile’, and has a hint of magic and magi about it.
You remember those “whiteboard animation” videos, in which a hand super-quickly draws a sketch and lays down words, while a voiceover plays? They were the ‘hot new thing’ circa 2014, and generally now go by the name of “explainer videos”.
Production of them is now a Cloud or tightly Cloud-locked subscription thing, and there appears to be no desktop-only software worth having for making them quickly and easily. The leading $40+ a month names are Sparkol VideoScribe Pro and Easy Sketch Pro, among others. There’s obviously a lot of money in such services, and the Web is very intensively astro-turfed with page after page of spam and misleading marketing on such things. It’s almost impossible to find reliable information. Anyway, they exist, and the market leader VideoScribe has impressive capabilities, yet is fairly simple to use.
Their lustre has faded, as a media form. ‘Explainer videos’ were very hot in 2015-16 as we came out of the Great Recession, but they became over-used for mundane purposes — often purchased off the shelf for $20 from quickie providers in the back-streets of India, via Fiverr. Such indifferent use has turned them into the humdrum Powerpoint slides of 2020. Meaning a superficially fancy presentation of fuzzy or half-baked ideas, done in a manner that’s then difficult to question or challenge. Thus making the format one that people now wince at, when they see it heave into view in a business meeting, teachers’ meeting or in a marketing context.
But that doesn’t have to be the case, and with a good story to tell and some creative flair they still have a place in education, especially for children and in lower-level work training. That set me wondering about how one would get a Poser line-art render to animate as if it was being drawn line-by-line by a human hand. Being able to output such a thing might be an attractive feature for Poser 12.
At first I wondered if Smith Micro’s MotionArtist could do this sort of “reveal a drawing effect”. Nope, seems not. Reallusion’s Cartoon Animator? Nope. You’d think that such a sales-worthy feature would be a natural one to add, but it’s not.
What about the vector tools? Surely there’s an Inkscape plugin? Nope. Clip Studio? Nope, seems not. Paint.NET? Nope.
Then I thought about Poser’s ability to output a Corel Painter script, for playback in Painter. But my in-depth look at that nearly forgotten Poser feature shows… that it does not actually lay down “strokes and lines”, whatever the end result may appear to be. Like I said above, a lot of this stuff is in the realm of “smoke and mirrors”, in terms of how it’s actually done vs. what it actually looks like.
But could a Python script in Poser go through a figure and selectively turn off the Geometric Edge line, stepping through the body parts according to a set “feet to head” list, and saving each step to a movie-frame as it went along? That would give a certain effect, but it might look a little weird in terms of not looking “hand drawn” when played back. For instance, the long leg lines would be drawn in “all at once”. One would also have to have the script generate shaped white geometry at certain co-ordinates (placed in front of the leg) to prevent the complete line from being seen all at once. It’s a very complicated possibility, for someone with a few weeks to spare and ninja Python coding skillz, but it doesn’t seem likely to happen.
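For what it’s worth, the outer loop of such a script is simple enough; it’s the masking geometry that’s hard. Here is the stepping logic alone as a runnable sketch, with the actual Poser calls (toggling an edge line, saving a frame) reduced to comments. The part names resemble standard Poser actor names, but everything else here is hypothetical:

```python
# Sketch of the "reveal body parts in a feet-to-head order" idea.
# The Poser-side work (switching a part's Geometric Edge line on, then
# rendering that state out as a movie frame) is only indicated in
# comments; this just demonstrates the cumulative stepping.

DRAW_ORDER = ["lFoot", "rFoot", "lShin", "rShin", "lThigh", "rThigh",
              "hip", "abdomen", "chest", "lHand", "rHand", "head"]

def reveal_frames(draw_order):
    """Yield, per frame, the list of parts whose edge lines are visible."""
    visible = []
    for part in draw_order:
        visible.append(part)      # here: turn this part's edge line on
        yield list(visible)       # here: save a movie frame of this state

frames = list(reveal_frames(DRAW_ORDER))
print(len(frames))                # one frame per body part
print(frames[0], frames[-1][-1])  # starts at the feet, ends at the head
```

Even so, as noted above, each part’s lines would still pop in whole, which is exactly the “not hand drawn” problem.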
Perhaps then what’s needed is for Poser 12 to whip up some maths that saves Comic Book Preview edge-inking lineart to SVG, and have that SVG embed pseudo “pen stroke” information based on mesh names. For instance, tell the SVG that: this set of ink lines comes from the head mesh | therefore when drawing = reveal the main head outline first | then eyes, nose, mouth | then neck | then reveal hair and hat lines. Or: this leg line is a long line from the leg mesh | therefore draw it bit by bit | first down one side of the leg and then the other. It would probably still not look convincing.
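Incidentally, the standard web trick for a ‘self-drawing’ SVG line is animating stroke-dashoffset from the path’s full length down to zero, which handles the per-stroke reveal side of this. If Poser could emit one path per stroke in draw order, the rest is boilerplate. A minimal generator sketch (the path data and length are placeholder values, not anything Poser produces):

```python
# Minimal sketch of the classic SVG "self-drawing line" animation:
# set stroke-dasharray to the path's length, then animate
# stroke-dashoffset from that length down to 0, so the stroke appears
# to be drawn over `seconds` seconds.

def drawing_svg(path_d, path_length, seconds=2):
    return f'''<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <path d="{path_d}" fill="none" stroke="black"
        stroke-dasharray="{path_length}" stroke-dashoffset="{path_length}">
    <animate attributeName="stroke-dashoffset"
             from="{path_length}" to="0" dur="{seconds}s" fill="freeze"/>
  </path>
</svg>'''

# A made-up zig-zag stroke with a rough length estimate:
svg = drawing_svg("M10 90 L50 10 L90 90", 200)
```

Chaining several such paths with staggered start times would give the “head outline first, then eyes, nose, mouth” ordering described above.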
The other way might be an AI that “knows” about the order of head, hands, feet, eyes, etc. It would look at any vectorised Poser lineart, identify the body parts, and then “know” the order in which a human would draw them, and how. It then saves a new SVG with that drawing information embedded in it. Clip Studio already has AI pose recognition, which transfers the pose from a photo to a 3D figure, so it’s not going to be impossible in future. It’s probably the future of this sort of thing, but it’s still some way off.
Alternatively, if you like the Poser Comic Book inking/rendering style and want to keep it, the best option can be simply to import your lineart to VideoScribe. It will actually automatically vectorize and you then choose the reveal style…
The bottom-left one is the best. It might appear that VideoScribe is preparing to draw ugly blodgy lines over your lovely lineart, but that is not the case. What it’s showing is where your lines will be revealed, not drawn.
What appeared to work best for this import was a 600px PNG with a plain background. On playback the hand and pen darts about all over, since there is no “order of laid-down lines” to follow, but if you set the draw time to sub 5-seconds then the flickeriness is not going to be too wearing on the audience (though a few may be on the floor having a flicker-induced epileptic fit). It’s also possible to remove the ‘drawing hand’ or just use a pen-nib instead. For reveal times, the best VideoScribe scene/video settings are said to be…
Another possibility is that you vectorise in Inkscape, set a new top layer, then quickly paint over it with a brush in an approximation of hand sketching. You then make these drawn lines fat enough to cover the lower drawing, and set the top layer to have an opacity of zero. When brought into VideoScribe, you can apparently tell the reveal “hand” to follow only the lines on the top layer, thus cunningly revealing the already-done layer beneath. That would give you a more realistic “drawing by hand” effect, on playback. Like I said, it’s all “smoke and mirrors” in this corner of graphics-world.
The last P3DO Pro update was December, with many new features added, and this new 2.8 is March/April 2020. Again, lots of new items added and other tweaks, on which the changelog is linked here. Much as I like PzDB as a Poser library manager, this new P3DO 2.8 has to be worth a mere $12.50 just to have as a backup. I believe it also works offline, whereas PzDB needs to ‘phone home’ occasionally, which may make P3DO useful for offline Poser creatives.
Curiously, the DuckDuckGo search-engine seems to heavily censor searches for the software (six results for “P3DO Explorer”, none relevant). Whereas Google Search is happy to provide 14,000 results. I assume that the word “p3do” is now on the Bing bad-words blacklist (DuckDuckGo is mostly Bing, with a bit of Yandex). Yes, it’s 2020 and advanced search-engines are still using clunky old keyword-based censorship. This is yet another reason why DuckDuckGo isn’t something you should use for serious searching, only for quick navigational searches and image searching (on which it’s actually quite good, mostly because it has less spam and fluff than Google Images).
Also, in other discount news, the main Reallusion Store has a coupon code for 50% off everything, also until 30th April 2020. Theoretically that brings the standalone conversion utility 3DXChange 7 Pro down to $99, which will be of interest even if you don’t want to get into the iClone ecosystem.
If you haven’t been following Reallusion closely, you can catch up with a handy new 3-minute Reallusion 2019 video roundup. It briskly showcases all the new ‘big’ content made available for sale in 2019, in both 3D and 2D. Note that there will also have been content from smaller makers, and last time I looked they had a separate store for such items.
The Bendies were first made for CrazyTalk Animator 3, then upgraded for Cartoon Animator 4 and its ‘360 head’ feature and new face puppeting. They also lack edge inking, which means webcomics artists could over-ink a bit to add their own look. They look excellent, and can be purchased individually at around $10-$12 each.
Worried about your fingers getting creaky? By the time they do, software will likely be stuffed with AI-enhanced voice assistants and the old mouse and keyboard may be gathering dust in a drawer. This is suggested by a new article from Maxon, maker of Cinema 4D, which profiles Kye Young and her use of the software. She Makes Comics in C4D Using Voice Controls…
“after developing degenerative arthritis in her fingers … she figured out how to use voice commands to control C4D, dramatically reducing the need for her mouse and keyboard.”