/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality


Welcome to /robowaifu/! We're a SFW board with just a couple of simple rules.


R&D General Robowaifu Technician 09/10/2019 (Tue) 06:58:26 No.83
This is a thread to discuss smaller waifu-building problems, solutions, proposals and questions that don't warrant their own thread. Keep it technical. I'll start.

Liquid battery and cooling in one
Having a single "artificial blood" system for liquid cooling and power storage would eliminate the need for a vulnerable solid state battery, eliminate the need for a separate cooling system, and solve the problem of extending those systems to extremities.
I have heard of flow batteries; you'd just need to use a pair of liquids that are safe enough and not too sensitive to changes in temperature.
This one looks like it fits the bill. The downside is that your waifu would essentially be running on herbicide. (though from what I gather, it's in soluble salt form and thus less dangerous than the usual variety)
https://www.seas.harvard.edu/news/2017/02/long-lasting-flow-battery-could-run-for-more-than-decade-with-minimum-upkeep

How close are we to creating artificial muscles? And what's the second best option?
Muscles are perfect at what they do; they're powerful, compact, efficient, they carry their own weight, they aren't dependent on remote parts of the system, they can be controlled precisely, and they can perform many roles depending on their layout alone.
We could grow actual organic muscles for this purpose already but that's just fucking gross, and you'd need a lot of extra bloat to maintain them.
What we need are strands of whatever that can contract using electrical energy. Piezo does the trick at small scales, but would it be enough to match the real thing? There have been attempts, but nothing concrete so far.
What are some examples of technology that one could currently use instead?

High level and low level intelligence emulation
I've noticed a pattern in programs that emulate other computing hardware.
The first emulators that do the job at acceptable speeds are always the ones that use hacks and shortcuts to get the job done.
It comes down to a tradeoff. Analyzing and recompiling or reinterpreting the code itself on a more abstract level will introduce errors, but it is an order of magnitude more efficient than simulating every part of the circuitry down to each cycle. This is why a relatively high-level emulator of a 6th gen video game console has system requirements close to those of a cycle-accurate emulator of the SNES.
Now, I want to present an analogy here. If training neural networks for every damn thing and trying to blindly replicate an organic system is akin to accurately emulating every logic gate in a circuit, what are some shortcuts we could take?
It is commonly repeated that the human brain has immense computing power, but this assumption is based only on the number of neurons observed, and most of them likely have nothing to do with intelligence or consciousness. If we trim those, the estimated computing power drops to a more reasonable level. In addition, our computers just aren't built for doing things the way neural systems do. They're better at some things, and worse at others. If we can do something in a digital way instead of trying to simulate an analog circuit doing the same thing, that's more computing power saved, possibly bridging the gap far earlier than we expected.
The most obvious way to handle this would be doing as many mundane processing and hardware control tasks as possible in an optimized, digital way, and then using a GPU or another kind of circuit altogether to handle the magical "frontal lobe" part, so to speak.
Wear and maintenance
What would you do if your wife accidentally cuts her skin, or rubs it away? You could partition the skin into replaceable "plates", but it would be nice to have a substance that you could just paint over the damage with and let it dry, at least for smaller scratches. It could also be used to cover up the seams.
What about internals? You might have to replace the inner lining of the mouth and uh, other human interface cavities once in a while. I don't have any ideas for those yet, perhaps something that binds when exposed to water, as opposed to the skin thing which would do better if it reacted to air.
How do you refill liquids? Using water-soluble chemicals only for everything would be ideal, because replacing, filtering and removing excess water is quite trivial. Self-cleaning is important as well, that's another use for water.
An additional port for providing the raw chemicals for dissolving might be necessary. I would place it at the navel or at the tailbone. If it was the latter, it might function as an extension and charging port as well. Wouldn't it be nice to have a detachable tail containing an antenna or additional interfaces?

Sanitation
When liquids are involved in any capacity, you must consider the possibility of nasty things growing in said liquid (microbes, mold). Especially the ones that'll inevitably hop over from your own filthy monkey hide. Adding some biocide to fluids might be necessary, though that may be harmful to the user as well. You need to be very careful with it.
Other things that could help are: an internal temperature that's unfriendly to microorganisms (like a permanent fever, which might also feel quite pleasant to the user), and frequent removal of old fluids. If the water in circulation acts as a coolant (see first post), we wouldn't even have to go out of our way to heat it up. Your own PC's processors easily reach temperatures needed to sterilize any liquid.
Open file (244.79 KB 620x409 rubbertubeimagees.png)
Open file (196.20 KB 1024x802 f1llfc3f5y3kyw6-large.jpg)
Open file (112.38 KB 672x837 299688_ts.jpg)
So I've been thinking of ways to manufacture cheap hydraulic muscles since weak pneumatic ones cost almost $100 for only 40 lbs of force. What a joke! But the answer seems simple: just make the woven sleeves out of strong nylon fishing line. Would it work?

Obviously making them by hand will be a ton of work just to make one sleeve, but once an initial prototype is tested and it works well then a 3D printable machine could be built to automate weaving fishing line into sleeves. The strength of fishing line is tremendous and it's cheap to buy. I estimate it would be able to withstand pressures up to 3500 psi, generating up to 2000 N of force. It'd be an open-source solution for powering our waifubots to give the Chinese merchants the middle finger.
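As a rough sanity check on those numbers: the standard idealized model for braided (McKibben-style) muscles gives F = (pi * D0^2 * P / 4) * (3*cos^2(theta) - 1), where D0 is the theoretical sleeve diameter at a 90-degree braid angle. A minimal sketch; the 8 mm diameter and 20-degree braid angle are assumptions picked purely for illustration, not measurements:

import math

def mckibben_force(pressure_pa, d0_m, braid_angle_deg):
    # idealized Chou-Hannaford force model for a braided muscle;
    # d0_m is the theoretical diameter at a 90-degree braid angle
    theta = math.radians(braid_angle_deg)
    return (math.pi * d0_m**2 * pressure_pa / 4) * (3 * math.cos(theta)**2 - 1)

PSI_TO_PA = 6894.76
# assumed geometry, purely illustrative: 8 mm sleeve, 20-degree braid angle
print(round(mckibben_force(3500 * PSI_TO_PA, 0.008, 20)))  # ~2000 N, in the same ballpark as the estimate above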

A silent hydraulic system could also be built with quiet actuators for low cost rather than using a noisy pump. The only issue I see with this is that hydraulics can get extremely hot, up to 82°C before the system starts to break down. This heat could be dissipated, though, via a large heatsink on a thermoelectric generator, using a diaphragm and artificial lungs. Our robowaifus would be able to exhaust excess heat by breathing and panting.

Some videos on hydraulic muscles:
https://www.youtube.com/watch?v=NDQlOqsr84s
https://www.youtube.com/watch?v=Cy9uaUxVNoI
https://www.youtube.com/watch?v=a6mRhuR_g-E
https://www.youtube.com/watch?v=c14AzY5dCnw
>>1627
Interesting idea about weaving together nylon fishing line anon, good thinking! Seems obvious now, but I admit I hadn't thought of it yet. Maybe we can find some good manufacturing designs or tech that can be used to both weave and twist the strands simultaneously? That might be a good electrically-driven muscle approach.

>A silent hydraulic system could also be built
While very desirable, I'm not sure how that would work exactly. Can you elaborate? Also, have improvements in hydraulics happened yet to make it more amenable to use inside a robowaifu? You know, the toxicity and maintenance hassle? It would be great if it becomes practical and cheap enough some day.

>Our robowaifus would be able to exhaust excess heat by breathing and panting.
I honestly wouldn't even mind her having strategically-placed vents like in:
>>>/loomis/172
and following.
Open file (100.55 KB 700x525 sunfloweroil.jpg)
Open file (169.15 KB 1002x582 developmento.jpg)
>>1628
Vegetable oil can be used as hydraulic fluid. What makes artificial muscles different from the hydraulics of heavy machinery is that you don't have to pump a lot of fluid around or have a huge reservoir. The muscle can remain mostly full. It's the pressure that contracts it.

At 3500 psi hydraulic fluid is compressed by 0.5% so you only need to pump a tiny bit of fluid into the muscle at that pressure to get a maximum contraction. Sunflower oil apparently has even lower compressibility than hydraulic fluid, which should make it more efficient with less energy being lost as heat.
https://pubs.rsc.org/en/content/articlelanding/2011/gc/c0gc00597e

From looking around at experimental results it seems larger muscles use much less pressure but more fluid to exert the same force. Tiny muscles exerting 2 kN of force is pretty cool but I don't think anyone is going to wanna be near their robowaifu when a hose bursts at 3500 psi. We'll have to go without battle waifus and use larger muscles with safe psi levels.

How would we even get enough power for robowaifus to lift heavy objects? Join an array of 40V chainsaw batteries in parallel?
Open file (206.95 KB 500x501 doggo.png)
>>1631
That's great news about sunflower oil anon, thanks. I had no idea. I'll mark the toxicity issue off the list then. :^)

We'll still need reservoirs, fittings, valves and pumps that are both inexpensive and can take the literal pressure. Maybe further developments in materials will be available to us.

As far as delivering bursts of high current goes, it will be a challenge needing a solution regardless of whether we go with hydraulics, electrical, or some type of hybrid. I think atp we have to assume she will be spending lots of time sitting in her 'recharging chair', at least for the first few versions.

The heat dissipation is probably a bigger Systems Engineering component challenge than any of us fully realize yet; what with all the energy being stored, transformed, and moved around and everything.

Makes me sit in awe again at the wonder of God's designs for biology tbh. It all just works so remarkably well together.
>>1632
>I think atp we have to assume she will be spending lots of time sitting in her 'recharging chair'
There could be swappable battery packs or air tanks, or (my favorite option at the moment) have it connected directly to a power source.

The low power density of energy storage combined with high cost and complexity makes an autonomous design impractical at the moment. I envision 2 tubes being plugged into mine: one for hot water to circulate under the silicone skin covering and one for air to power the pneumatics. That way I can keep it in my bed as a giant warm body pillow even when it's turned off, then plug in an air hose when I want it to move.
>>1635
Fair enough. ATP you may be right. Certainly most of the low-budget lab work seen in videos has the robot tethered to external equipment.
Open file (385.89 KB 1930x925 DSC_0139.JPG)
>>1627
First attempt at winding fishing line into a sleeve for artificial muscle. This was a lot easier to do than I thought it'd be. I just stuck two pins into a piece of doweling and wrapped one piece of fishing line around back and forth in different directions crisscrossing it.

I need to figure out a way to keep the ends clean so it wraps evenly, and a way to melt the ends together without burning the plastic. A machine could definitely do this better and faster than a person. I'm not sure how I'm gonna put attachments on the ends yet. I'll have to buy some parts for that, but I'm pretty sure we'll be able to manufacture all the cheap artificial muscle we need.
Open file (335.91 KB 620x506 AMT-mooving.gif)
>>1688
There is a machine you can make for automatic winding if you have access to a 3d printer:
http://iskanderus.ru/amt-core/
https://www.youtube.com/watch?v=iMMGfzYXwAU
Also, are you using UHMWPE fishing wire? That's what the 3d printing community used before switching to belts.
>>1691
Well that saves a fuckton of work. Thanks for the link. The only trick is it has to be woven into a sleeve like a Chinese finger trap. I have to figure out a way to wind multiple lines at a time without messing them up.

And it's just some cheap 30-lb nylon fishing line I had lying around. Nylon should be able to hold up to 3000 psi, but I'm only going to be operating the muscles at 80-100 psi.

I've been doing some reading on different materials and UHMWPE is stronger but it stretches twice as much as nylon which is no good for hydraulics. Kevlar has extreme resistance to stretching (50x more than nylon) and 100-lb Kevlar thread is only about 3x more expensive than fishing line. We'll have to experiment and see what works best.
>>1692
>UHMWPE is stronger but it stretches twice as much as nylon
That depends on whether you're using high-quality heat-treated spun fiber or cheap extruded monofilament wire. I haven't done too much research into this field, but the reprap community usually goes with specific materials for good reasons, and UHMWPE has been replacing Kevlar fibers in body armor for years now.

>I have to figure out a way to wind multiple lines at a time without messing them up.
You could add a looming mechanism with several spools. That machine wasn't intended to make sheaths for hydraulic muscles but it'll do the job with some alterations. This thing is the best I could find on thingiverse for weaving.
https://www.thingiverse.com/thing:151798
>>1691
Neat, thanks anon. I've been wondering if there were some DIY designs out there already for this problem of sleeve and other 'fabric' weaving. We'll have 101 needs in a full-featured robowaifu design for these things.
>>1698
>This thing is the best I could find on thingiverse for weaving.
Not that anon, but very cool. Thanks.
Open file (124.92 KB 1461x600 DSC_0146.JPG)
I just realized I have to weave all the lines in one pass. The lines are supposed to pass under, over, and under again.

A machine that manufactures braided tubing:
https://www.youtube.com/watch?v=vvNaW8WVwP8

>>1698
Inkle looming our own belts will save a lot of money.

TIT used Kevlar braided tubing for their muscles but it costs a fortune to buy it.
Open file (30.36 KB 317x436 weave.jpg)
Got no clue what I'm doing but it's gotta be something like this.
>>1702
>Inkle
>TIT
Help newfags out with the lingo?
>>1705
The inkle loom linked is a type of loom for making belts, bands and bracelets. TIT is the Tokyo Institute of Technology.
>>1706
Got it, thanks. Sorry to be a nuisance, but if it's for robowaifus then it's interesting and I want to understand it better. How can this loom help robowaifus anon?
Open file (1.06 MB 1055x523 tablet-weaving-4.png)
Lapis anon here. I used tablet weaving last year to make /miku/ themed belts, so I know how to tablet weave now (at least in a way, there are many ways to do it).

>>1698
This thing looks like a holder stand; it simply makes weaving by hand easier, but it doesn't weave for you, so it's not a "machine".
I can also confirm the bulletproof 3D prints, but it's a fairly recent thing: https://interestingengineering.com/researchers-3d-print-bulletproof-plastic-layered-cubes

>>1702
Weaving is more work than I thought and it also costs in materials, but it's nice for making custom things from one's own designs.

I guess this is something like what you guys are working on: https://advances.sciencemag.org/content/3/1/e1600327
There are many ways to make woven muscles
Open file (39.73 KB 390x576 cocona.jpg)
>>1707
You can place motors inside limbs or other places and use a belt to transfer the mechanical energy to joints. Cocona Kosaka has belts to drive her joints so she doesn't need large heavy motors in them.

Parts are expensive, especially to get custom sized, so we're trying to manufacture as much as we can on our own.
>>1711
OK, that makes sense. Can something like a timing belt for a car engine be used? Just trying to spitball here since I don't understand weaving very well but know cars a little.
>>1712
Yeah, timing belts are the most efficient. They're mostly used for shoulders, elbows and knees. You can buy them, but you'll have to design around the lengths and sizes available. It's not uncommon to use wires. No one here really knows what the best approach is yet.
Open file (56.48 KB 794x1123 8293241527437685097.jpg)
>>1707
This diagram shows how hydraulic muscles work. The weave contracts the bladder when air or fluid is pumped in.

It might be possible to buy double-end style towing sock cables made of the material you want rather than making your own.
https://en.wikipedia.org/wiki/Towing_sock

>>1715
Not so sure about the efficiency of belts compared to synchromesh drive systems for use in robotics. Aside from taking up less space, they can move in all directions. They're used in higher-end 3D printers because the tension doesn't need to be adjusted, which means better reliability in prints.
>>1715
>>1716
OK, thanks for the information anons. I understand it a little better now.
>synchromesh drive systems for use in robotics
I assume you mean something like this. I've never heard of it before.
https://www.sdp-si.com/products/Timing-Belts-and-Cables/Synchromesh-Drive-Systems.php
http://www.roboticsbible.com/robot-drive-systems.html
http://www.robotbasics.com/robot-drive-system
>>1716
I'm assuming that when the sleeve balloons is when force is being exerted on the armature to change its shape?
Open file (31.88 KB 880x374 DSC_0132.JPG)
It's all ogre, bros. How will I look my robowaifu in the eyes when she realizes she has origami muscles because I'm too much of a brainlet?
>>1692
>>1736
Why are you all trying to make the sleeving? Why not just use something like this:
https://www.mcmaster.com/2573K48

I think with this you'd probably rather have the silicone coating on the inside, but as long as you don't cut them too long it wouldn't be hard to invert them.
Barring that, buy some expandable sleeving and run an inflatable tube through it.

Although truthfully, as fun as the artificial muscle stuff is, I think it's mostly a dead end outside of perhaps some "extra" features, since it's so complex to control.
Open file (14.33 KB 300x207 10320183.jpg)
Open file (129.07 KB 670x1200 10320183a.jpg)
>>1719
Think of it as a finger trap, when pushed from the inside by air pressure the ends contract towards each other.

>>1738
The type of weave it uses may be unsuitable, or it won't contract much when the tube is filled under pressure, and that thick silicone rubber might require a lot of pressure. We won't know until some tests are done.

I've also looked at some braided tubing and the smallest I've found is for model making, not sure what the outer sheath is made of or if the weave/vinyl tubing it uses would work.
https://www.1999.co.jp/eng/10320183
>>1738
>buy some expandable sleeving and run an inflatable tube through it.
That's an interesting idea. But I imagine that basically it'd be comparable in price and also far less of a pain to just buy it already pre-composited together. Still, inventive thinking anon.
>>1739
Very nice hobby outlet anon, thanks for the link.
Found an interesting web page on the reprap forums that has a guide for making air muscles. There are other pages on robotic topics there as well.
https://www.imagesco.com/articles/airmuscle/AirMuscleDescription01.html

>When the internal bladder is pressurized it expands and pushes against the inside of braided mesh sleeve, forcing the diameter of the braided mesh to expand. The physical characteristic of the mesh sleeve is that it contracts in proportion to the degree its diameter is forced to increase. This produces the contractive force of the air muscle.
So the type of weave on the braided mesh doesn't seem to matter? This other research paper on the topic brings up more questions so I probably won't get a good idea until tests are done first hand.
https://www.knowledge-share.eu/en/patent/bi-directional-pneumatic-braided-muscle-actuator/
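That "contracts in proportion to the degree its diameter is forced to increase" behavior falls straight out of the braid geometry: a thread of fixed length b making n turns around the tube gives muscle length L = b*cos(theta) and diameter D = b*sin(theta)/(n*pi). A quick sketch with made-up thread numbers, just to see the tradeoff:

import math

def braid_geometry(thread_len_m, turns, theta_deg):
    # one braid thread of fixed length b wrapped n times around the tube:
    # muscle length L = b*cos(theta), diameter D = b*sin(theta)/(n*pi)
    t = math.radians(theta_deg)
    return thread_len_m * math.cos(t), thread_len_m * math.sin(t) / (turns * math.pi)

# assumed numbers for illustration only: 0.3 m threads, 5 full turns
for theta in (20, 35, 54.7):  # ~54.7 degrees is where the contraction force hits zero
    L, D = braid_geometry(0.3, 5, theta)
    print(f"theta={theta:>5}: length={L*100:.1f} cm, diameter={D*1000:.1f} mm")

Sweeping the braid angle shows the length shrinking as the diameter swells, which is exactly the quoted behavior.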

Another site on this topic with an interesting solution to combat wear;
>A significant improvement to these devices has been made by the Festo Corporation, which has recently introduced a new product called fluidic muscle. This operates on the same principles as a standard BPA, with the critical difference being that the fiber mesh is impregnated inside the expandable bladder. The resulting actuators have a demonstrated fatigue life on the order of 10,000,000 cycles at high pressure.
https://en.wikibooks.org/wiki/Robotics/Components/Actuation_Devices/Air_muscle#Braided_pneumatic_actuator
>>1745
Thanks for that imagesco air muscle article anon, it's a very simple and useful approach. I made an archive of all the article pages to preserve it from disappearing on us. Grab it if you care to.
https://files.catbox.moe/sqjys1.gz
>>1746
There was already a pdf of it but I didn't post it as the web page has links at the end which might be useful for research and there are other topics on the site. It's air-muscle-information-sheet-am-02l.pdf

And instead of sleeping I've been looking into the braided sheath question. It seems that aside from the angle being important, there's no good way to model the behavior accurately for simulations (Modelling_of_the_McKibben_artificial_muscle.pdf). Also, pleated braided sheaths that can increase the contraction force seem much more difficult to manufacture, but some very good ideas are presented in thirdgenPPAM.pdf.

And lastly there's frobt-05-00136.pdf which claims to have typical air muscles beaten on almost all fronts by using an origami folding approach.
>>1751
Thanks anon. Please always dump any pertinent docs you may have here. Hopefully /robowaifu/ can become a clearing-house of sorts both for ourselves and for others in the future. I'm doing everything I know how to do to ensure we keep everything preserved in the event of future takedowns or other disruptions.

Question: I read in the promo article from the Pisa group that the angle of the weave was somehow variable. At least they implied it could control stiffness if adjusted. One thing they didn't make clear, however, is whether that is something that can be controlled dynamically, or whether it has to be designed and placed into the material at manufacturing. Got any insights there?
>>1710
Hey Lapis Anon, there's a post about wires that themselves contract when heated. Thought I'd try to catch you here about it.
>>1747
>>1752
>>1755

Also, I hope you can tell us more about how weaving can be used for constructing robowaifus. Not for the attire and such, but for the mechatronics etc.
Open file (83.22 KB 568x386 Selection_005.png)
>>1751
>And lastly there's frobt-05-00136.pdf which claims to have typical air muscles beaten on almost all fronts by using an origami folding approach.
900% seems indeed to blow away all the competition. I wonder how many Newtons force you could manage with say a 12cm^2 paper face on the end of one of these types of effectors?
The fishing line was too hard to work with but I think I understand the winding pattern now. I'm gonna order a 3D printer soon and prototype a mechanism for winding mesh sleeves in various materials.

>>1738
Tubing is expensive and meshes are usually woven in a way to prevent stretching and swelling. I'm more interested in trying new ideas that haven't been done yet with hydraulic artificial muscles. Most of the research has only been done on pneumatic ones or stress testing high-pressure hydraulic ones. I want to find a silent and powerful actuation system that can move like a human being.

>>1745
The number of strips or threads in the mesh affects the contraction length; fewer generally increases the contraction ratio. Bladder diameter and mesh diameter also affect the contraction length: smaller diameters have higher contraction ratios.
https://iopscience.iop.org/article/10.1088/1742-6596/908/1/012036/pdf
Open file (204.38 KB 411x991 Selection_006.png)
>>1758
>I wonder how many Newtons force you could manage with say a 12cm^2 paper face on the end of one of these types of effectors?
Found something on p7. It's actually quite powerful tbh.
>>1761
>I'm gonna order a 3D printer soon
Always a big deal. Any idea what you're going to get anon?

>I want to find a silent and powerful actuation system that can move like a human being.
Godspeed Anon.
Open file (15.90 KB 304x288 happy_frog_boy.jpg)
>>1761
>dat mp4
ROBOPEPE WILL SOON BE REAL
>>1753
>clearing-house
heh poor choice of words. archival library is probably a better choice. :^)
Open file (189.12 KB 959x680 rotation.png)
>>1753
>One thing they didn't make clear however if that was something that can be controlled dynamically, or if it has to be designed and placed into the material at manufacturing.

There's a movable fitting at the bottom of the diagram and they mention that the user can change the initial braid fiber angle independently. Since they're only doing basic tests they probably changed it by hand between setups but adding in a mechanism to change it dynamically seems the logical next step.

The most interesting thing about this is the rotational possibilities when using them in parallel.

>>1758
Here's another paper on those origami-shaped air muscles. More powerful and compact when compressed, yes, but they're not ideal for larger muscles on humanoid robots because of the large volume they take up.

>>1753
>>1768
For anyone having trouble getting access to research papers there's libgen.is that has almost everything when it comes to scientific articles.
>>1770
>paper
Thanks, I'll read it.

>large volume
Hmm. I wonder if they would serve inside the thorax and possibly inside the thighs then? They do seem to be able to bring a remarkable amount of force to bear given their quite light weight.

>For anyone having trouble getting access to research papers there's libgen.is that has almost everything when it comes to scientific articles.
Thanks for finding them and posting them here for us to read Anon, much appreciated. :^)
Open file (223.96 KB 669x335 Selection_008.png)
>>1770
>because of the large volume they take up.
Obviously, alternative designs can be managed tbh.
>>1772
What if you bundled long strips packaging many of these things together and then activated them all simultaneously? Wouldn't that somewhat mimic actual muscle tissue in both structure and behavior?
>>1757
I'd prefer weaves over heating. I've seen memory metal before and it's interesting, but I'm not sure if actuators are the proper application for it.
I've been kinda out of it for a while, but I think using fishing line may be a good idea
>>1774
>I'd prefer weaves
>I think using fishing line may be a good idea
I'm assuming you'd want to weave fishing line as an actuator then? I think the anon who's buying a new 3D printer ITT is taking the same approach.
>>1779
That will be interesting to watch develop as a technology, thanks.
Open file (777.07 KB 1013x1559 FireShot Capture.png)
>>1763
An Anycubic Mega-S
>>1739
>The type of weave it uses may be unsuitable or it won't contract much
Behaves pretty well as far as I remember but I haven't tried this particular one.
Regardless, short of buying the pneumatic muscle from Festo, the wire sleeving is the cheapest and most effective bet. I've had good luck with it the few times I've made pneumatic muscles in the past.

That said I still think a cable driven design is the better route to explore. I'm going to be doing some experiments in that direction after the holidays.
Open file (1.92 MB 600x360 smugklann.webm)
>>1765
>soon
klann linkages are really fun. Would make for a nice robo monmusume base.
>>1789
Great. I hope you let us know how the assembly & operation of this printer goes anon.

>>1793
Kek, that's awesome. Did you do this yourself anon?
>>1793
>klann linkages
https://www.diywalkers.com/

they won't be doing bipedal robots anytime soon by the look of it, but interesting. maybe an all-terrain walker for you and your robowaifu to ride around in?
Jansen's Linkage seems like an interesting mechanism too. I could kind of re-visualize this as a sort of quadruped animal shoulder. If you extended the lower 'leg' and added a movable paw on the end you'd be set. The motive impulse seems to be a single rotary mechanism and one fixed pivot point.
>>1799
it just occurred to me while watching this, mesmerized, that if you added a movable coupling at the pivot point that you could slide along an arc-shaped groove on demand, then you could create a synchronized motion that rhythmically imparts more force down at the end effector (by multiplying its horizontal offset). This sliding motion at the coupling can also likely be tied directly to the rotary impulse using additional linkage.

The intended effect would be kind of like the way a greyhound dog can really 'lean into it' while chasing a rabbit.
>>1799
I wonder if there is some way to adapt this to a bipedal robowaifu?
>>1797
Long time ago as a side project.

>>1798
Yeah, dynamics aren't the best on it. The site you link has some better walking linkages. You can see why in that video too: Klann linkages have a halting sort of gait since they don't have a very steady velocity curve. It does give a neat crab-like effect though, if that's what you're going for.
I want to make a snibbedy snab one at some point. Maybe I'll try out that trot bot linkage.

>>1799
The Jansen linkage looks pretty but its mechanics aren't that great. If you only use it on flat ground it could be okay, but they require 3 for a steady gait vs 2 for other walking linkages.
>>1808
>I want to make a snibbedy snab one at some point. Maybe I'll try out that trot bot linkage.
That would be cool. Good luck.
Open file (97.86 KB 696x1280 ShortStackExample.png)
>>1805
Give your waifu a tail; if you're ok with her chunky legs, you're golden. Hope you like short stacks anon.
Friendly reminder that cable-based drives are smaller, lighter, more durable, and most importantly, far more efficient than pneumatic systems. I actually really like pneumatic systems; I've worked on/with and designed pneumatic systems in the past. You can find industrial spares on ebay or in junk yards that usually work ok or need basic maintenance if you'd like to use them for manufacturing mechanisms. They are phenomenal as parts for industry, but their relatively low efficiency and high mass (compared to cable mechanisms driven by brushless motors) prevents them from being usable for waifus at or below standard human heights. They're theoretically viable for giant waifus above 8 feet tall, as the torque required to move increases drastically. Pneumatics only really make sense on stationary robots, which is why they're used in amusement parks where a giant compressor can power many animatronics that only worry about the mass of their upper bodies.
Open file (54.38 KB 640x400 leveltable.jpg)
Extremely useful free software for waifu mechanical development. https://blog.rectorsquid.com/download-linkage/
Open file (20.66 KB 523x849 OneMotorLegPic.PNG)
>>1879
A design for a leg which runs with only one motor. It's simple, easy to implement into a design, somewhat efficient, and with an extra motor in the hip can be used for walking, running, sitting, and standing, all based on the angle of the mechanism. It's lacking a foot though, if anyone would like to help with that. (Not sure how to include a copy of its file; julay doesn't support the file type.)
>>1692
Make sure you look at the elastic stiffness of the material, also known as the modulus of elasticity or Young's modulus, not the total stretch at breaking point. Plastics often deform plastically a great deal before breaking, so looking at the total elongation does not give a good idea of stiffness. Same with metals. Mostly people design things to stay within the elastic limit. I'm saying this because I'm pretty sure that UHMWPE is stiffer than nylon.
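To make the stiffness point concrete: elastic stretch under axial load is dL = F*L/(A*E). A sketch below; the moduli are rough handbook-style assumptions (monofilament and gel-spun fiber differ a lot, as noted above), so check real datasheets before trusting the numbers:

import math

def elongation_mm(force_n, length_m, diameter_mm, youngs_modulus_gpa):
    # elastic stretch of a straight line under axial load (Hooke's law)
    area_m2 = math.pi * (diameter_mm / 2 / 1000) ** 2
    return force_n * length_m / (area_m2 * youngs_modulus_gpa * 1e9) * 1000

# rough, assumed moduli for illustration only:
# nylon monofilament ~3 GPa, gel-spun UHMWPE fiber (e.g. Dyneema) ~100 GPa
for name, E_gpa in (("nylon", 3), ("UHMWPE fiber", 100)):
    print(name, round(elongation_mm(20, 0.5, 0.5, E_gpa), 2), "mm")

Under the same 20 N load on a 0.5 m, 0.5 mm line, the assumed nylon stretches roughly 17 mm versus well under 1 mm for the fiber, which is the whole argument for looking at modulus rather than elongation at break.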
We really ought to have a hands-specific thread here. So, I found this consortium has put together a tele-operated, dexterous, haptic, robotic hands rig. There is the robotic hand manufacturer themselves, a company that specializes in sensors to discriminate and quantify human touch sensory perception, a haptic-feedback glove manufacturer, and (for some reason) All Nippon Airways is footing the bill to develop the project. https://www.shadowrobot.com/telerobots/
What if we used tiny processors like Arduino Nanos to control the motions of just individual joints or limbs? Say we had 5 nanos for each hand, one for each finger. Wouldn't that be enough computing power to handle all the motion, sensing, and network communications for just that part? The software for a finger, say, could be tuned and debugged until it was just right, then treated as a little black box of technology. It could be networked together with all the other little parts until you built up a whole arm from the shoulder down, for example. Then each arm unit could be attached and networked into a central torso unit where the real computing power (at least the mobile part of it) lies. Legs and head would be developed separately and connected in a similar way. This should at the least help decrease the complexity of the individual parts, making them cheaper to create, faster to develop, and individually modular/upgradeable, wouldn't it?
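For anyone who wants to toy with that topology before touching hardware, here's a sketch of the idea in Python; the JointCommand message and FingerNode class are entirely hypothetical stand-ins, not a real Arduino or firmware API:

from dataclasses import dataclass

@dataclass
class JointCommand:
    # hypothetical message format a finger node would accept over the network
    joint_id: int
    target_angle_deg: float
    max_torque_nm: float

class FingerNode:
    # stand-in for one per-finger microcontroller: it owns its own joints and
    # exposes only high-level commands to the limb controller above it
    def __init__(self, finger_id, joint_count=3):
        self.finger_id = finger_id
        self.angles = [0.0] * joint_count

    def handle(self, cmd):
        # real firmware would run a local control loop toward the setpoint;
        # here we just accept it, to show where the module boundary sits
        self.angles[cmd.joint_id] = cmd.target_angle_deg

hand = [FingerNode(i) for i in range(5)]  # one "Nano" per finger
hand[0].handle(JointCommand(joint_id=1, target_angle_deg=45.0, max_torque_nm=0.2))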
>>2032
also can we effectively simulate a nano or one of these other devices properly ahead of time? we are still a ways off from assembling hardware yet, but we can probably finish up with a reasonable sim in under a year imo. can we go ahead and start writing the code for the electronics part now and have it run/debug inside the sim, also now ahead of time?
>>2032
>>2041
This is fatally flawed, but let's discuss the idea just the same. Let me list off some issues with this plan, in no particular order.
1. It is woefully inefficient to use several processors when one larger or faster one can do the same job. It will take more PCB space, cost more ($), and consume more power. Inter-processor communication and coordination (think multi-threading) also adds a huge overhead, on the order of a magnitude or two. If you need more GPIOs, there is the 44-pin ATmega164 or the 48-pin ATmega3209. If you need faster and a wider-bit bus, STM32F0 cortex-m0 processors can run at 48 or 72Mhz, operating on 32-bit integers (no FPU in cortex-m0), at half the cost of Atmel's (now Microchip) offerings.
2. The ATmega328P, the brains of the "Nano," is ancient tech. Depending on the algorithms you employ, the lack of a floating-point unit or hardware divide may make them unfeasible to run in real time on Atmel's 8-bit microcontrollers. There is a reason most drone controllers and ESCs use 32-bit Arm processors: brushless motor commutation and Kalman filters, to name a couple.
3. You should not be trying to prototype novel algorithms and approaches on limited hardware.
4. You should not be trying to prototype on a computer simulator of limited hardware; just use the damn computer, at least until you've got a working prototype.
5. While not trivial, it is not particularly hard to make a "smart" finger. Use a few motors pulling nylon threads, potentiometers in the joints, and current sensors to limit applied power and be "compliant." Such a finger could move to any physically possible (constrained by the joint) position, at any specified speed, within specified power limits to prevent damage to itself and to objects being manipulated. What will give you trouble is the next step.
6. Leaving the low-level motor control to each finger, how will you coordinate the position, speed, and force of 5 fingers per hand in real time? What you are suggesting is a classic subsumption architecture. The problem with those is that as you go "up" in levels, the problems become more abstract and complex, and quickly. At only the hand level, you need to coordinate the movement of every finger at once, to form a desired shape in multi-dimensional space at precisely the desired time, and with just the right amount of force, 10s to 100s of times per second. And it gets worse all the way up until you reach the full-body level, where every milligram of mass and micro-Newton of force of every sub-system factors into whether you're standing up or falling down. Just because the processor in each finger can manage its own motors and sensors does not mean it has any awareness of higher levels of coordination or the ability to decide or act independently.
7. Starting with simulation is a good idea. In my experience, hardware production is 1 part design, 1 part firmware writing, and 8 parts finagling moving parts, fine-tuning joint friction, making connectors, cutting shit, gluing shit, sanding shit, etc. Skip all of this and go straight to writing control algorithms in a high-level language, and save trimming 3D-printed parts until you've got all of the kinks worked out.
8. If you're going to simulate, I would go up one level of abstraction and get rid of all low-level physics, sensor feedback, and control. A finger can be 3 cylinders and 3 joints, 2 of them constrained to 2 dimensions. There's a simulator thread, have you checked it out?
Also, don't think I'm trying to discourage anyone from working with hardware. I just want people to be aware of the challenges involved.
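To put point 5 above in concrete form: a compliant joint is basically a position loop whose motor command is clamped by a current (torque) limit. A minimal sketch, with all gains and limits made up for illustration:

def compliant_joint_step(target_deg, measured_deg, integral, dt=0.01,
                         kp=2.0, ki=0.1, current_limit_a=0.5):
    # one PI step for a tendon-driven joint; clamping the motor command to a
    # current limit is what makes the finger "give" instead of crushing things
    error = target_deg - measured_deg
    integral += error * dt
    command = kp * error + ki * integral
    command = max(-current_limit_a, min(current_limit_a, command))
    return command, integral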
>>2048
Thank you for the critique and advice. I'll do research into the points you brought up. While I still think this approach seems to vaguely mimic how life works (well, only very vaguely, haha) and therefore might be a good model to follow, I also take your warnings about the difficulties of this approach seriously. I'll check out the things you mentioned too.
Open file (42.72 KB 246x426 this one weird trick.jpg)
Open file (29.34 KB 852x662 Figure_1 mean loss.png)
I haven't seen this written about anywhere online, but I found that if you use attention on fully connected layers with a binary cross entropy loss, you can quickly train a network to encode a one-hot vector into a dense encoding then decode it back to the one-hot vector flawlessly. It saves a ton of training time.

# naive fully connected autoencoder
encoder = Linear(labels, embedding_size, bias)
decoder = Linear(embedding_size, labels, bias)

# this one weird trick: gate each linear layer with a second projection
encoder1 = Linear(labels, embedding_size, bias)
encoder2 = Linear(labels, embedding_size, bias)
decoder1 = Linear(embedding_size, labels, bias)
decoder2 = Linear(embedding_size, labels, bias)
out = encoder1(x) * encoder2(x)
# note: as originally written this line read decoder2(x), which is a shape
# mismatch (x has `labels` features, decoder2 expects `embedding_size`),
# so presumably decoder2(out) was meant:
out = decoder1(out) * decoder2(out)
>>2377
>12yo builds house full of robowaifu meidos as a hobby
>gives zero fucks when they fall trying to haxxor the mighty chobitsu
>soft spot for his oneechan's avatar though
minoru chad af tbh. any chance you can translate that into English for us anon? obviously it sounds like something good you've discovered there.
>>2377 Title your graph and label your axes, plebeian.
>>2379
It's not really a discovery. There are already papers that implement this and call it attention, since the multiplication acts as a mask to focus on and attend to certain data. It's just not common knowledge how effective this simple structure is and that it can be easily dropped into models for huge gains.

Here's a quick rundown for anyone new to machine learning: a one-hot vector has a single one set in it and everything else set to zero. They're used for classifying data so that each feature in the vector represents a different label or item. If you have 50,000 different labels though, they become too cumbersome to work with and need to be compressed into an embedding to be handled efficiently. An embedding is a compressed representation, like how a 16-bit binary number can represent 0-65535. An embedding vector with 16 dimensions can easily and non-ambiguously pack up to 2^16 labels. One-hot vectors have fallen out of favor though because they're sparse and too difficult to train. Even encoding them into an embedding can be a costly operation, unless you use special sparse tensors and optimizers. So the common approach now is to start off with a trainable weight tensor of shape number_of_labels x embedding_size, then use a label's index to get its embedding, do stuff with the embedding, and output the log-probabilities of new labels or word tokens. It's also possible to look up the nearest neighbor to an embedding query to get an index, but that's much slower than outputting log-probabilities without going full product keys no jutsu, a technique for fast differentiable nearest neighbor search discovered last July: http://papers.nips.cc/paper/9061-large-memory-layers-with-product-keys.pdf

So what is the point of improving on one-hot vectors then, if they're rarely used anymore? It's just a toy example showing that backpropagation can quickly solve compressing these sparse one-hot vectors into a dense encoding and losslessly decompress them back to their original state, whereas a simple fully connected layer cannot, especially as the embedding size grows.

I'm not really knowledgeable enough to understand why this works so well, but I know in fuzzy logic multiplication functions as a fuzzy AND, and the bias term of the linear transformation can imitate a fuzzy NOT, basically creating a fuzzy NAND. NAND is functionally complete and can express all possible truth tables by combining NANDs in various arrangements. Giving some of this completeness to a model seems to increase its ability to express a wider range of functions and provide a much better gradient to train on.

>>2382
It's just the mean binary cross entropy loss over each minibatch for this toy example of a sparse autoencoder, using 4096 labels, an embedding size of 12 and a minibatch size of 128. I've been working on a repo in my spare time with different automated tests that people can run to see how much it improves performance on various tasks, such as classifying MNIST digits and seq2seq language translation. I'll post it here when it's done, with labelled axes and titles of course.
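In the meantime, a minimal self-contained PyTorch sketch of that toy setup as I understand it; the sizes (4096 labels, embedding 12, minibatch 128) come from the post above, while everything else (Adam, the learning rate, uniform random one-hots) is my own guess at the wiring:

import torch
import torch.nn as nn
import torch.nn.functional as F

labels, embedding_size, batch = 4096, 12, 128

class GatedAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.e1 = nn.Linear(labels, embedding_size)
        self.e2 = nn.Linear(labels, embedding_size)
        self.d1 = nn.Linear(embedding_size, labels)
        self.d2 = nn.Linear(embedding_size, labels)

    def forward(self, x):
        z = self.e1(x) * self.e2(x)      # multiplicative "attention" gate
        return self.d1(z) * self.d2(z)   # logits over the one-hot labels

model = GatedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    # sample a minibatch of random one-hot vectors
    x = F.one_hot(torch.randint(labels, (batch,)), labels).float()
    loss = F.binary_cross_entropy_with_logits(model(x), x)
    opt.zero_grad()
    loss.backward()
    opt.step()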
>>2390
Alright, I think I understood the one-hot binary encoding notion. How many features do you think our robowaifu will actually need in production, hundreds of thousands, hundreds of millions? You seemed to rhetorically ask 'why', then replied with 'it's just a toy'. So, not useful, regardless of size then? I'm vaguely aware of NAND being a universal logic gate. Are there optimization benefits to not using them, or would it be simpler for all the rest of us to just approach this stuff with NANDs in mind?
>>2397
I'm not really good at explaining things. I meant the example is just a toy problem to make it clear what it does. It's useful in all kinds of models, not just this. It'll be more obvious how useful it is once I finish the other examples. I just thought I'd post it here for anyone developing their own models in the meantime, because it can save a lot of training time. I've experimented with creating an actual fuzzy NAND limited to 0-to-1 probabilities before, but I didn't have as much success with it as I did with this. Especially once you start stacking layers in sequence, the gradient vanishes and makes training them extremely difficult. With this though, it can emulate a fuzzy NAND in a loose way, but it's not limited to probabilities, and depending on the weights and biases some output features can emulate NAND and some can emulate AND. Also, since each input feature before the multiplication has its own weight and bias, either input can emulate NOT. Some features can also pass through with no AND applied to them at all, where the other linear layer's weights are close to 0 and bias close to 1. So it has a lot more ways to express functions than NAND alone.

And the human eye can see about 500 megapixels, in red, green, blue and light intensity. There are also cells for detecting changes and motion. It's not just hundreds of millions but billions of features to process visual input at a human level alone, and that's not including all the other features the brain's processing is extracting from those raw features. It would be more useful to think about how many parameters are needed. If you were to compare the parameters in an artificial neural network to the synapses in the brain, the full GPT2 model is 1.5 billion parameters and the brain is estimated to have around 100,000 billion synapses. And there's more to the brain than just the connections. The length of the connections, the timing of activations and the chemistry going on are also extremely important. Dopamine and noradrenaline in particular have a great impact on brain function and play a role in synaptic plasticity, memory and cognition. So at least quadrillions of parameters would be needed to simulate a brain.

Fortunately I don't think it will be necessary to do that to build a robowaifu. A standard laptop can run a trained MuZero model and easily beat a Go champion. Computers don't need to go through all the pains of survival, reproduction and analog computation. They can search and learn adequate programs that our brains are not even capable of running, let alone finding. Eventually the brain's capabilities will become a tiny subset of what computers can do.
>>2399 Alright thanks for the explanations Anon. Hopefully you'll make your work available here someday.
>>84
>Sanitation
>When liquids are involved in any capacity, you must consider the possibility of nasty things growing in said liquid (microbes, mold). Especially the ones that'll inevitably hop over from your own filthy monkey hide.
Are there any microbes that are harmless to us that would prevent the growth of harmful microbes that we could use?
>>2668
Yes. I'm not a professional microbiologist, but I've had to study it. Microbes are literally everywhere on the planet's surface. They form colonies known as the normal flora that not only spread out over our skin, for example, but in fact form an important part of the antimicrobial defense mechanism against deleterious ones. They do this simply by dint of occupying the available niches, effectively denying these spaces to other species by preventing them from getting a foothold. In a system that continually recycles/sloughs off (like skin or the linings of our digestive system) the net effect is basically entirely beneficial. In a closed system like those typically inside a robowaifu, even these would still present a basic maintenance issue and require cleaning. After all, these are living organisms (at least the bacteriological ones are) and they generate waste products. Does that make sense Anon? So for example algae would tend to take over as the standard flora inside a water container, and that's a good thing in the context just outlined, but they would still have to be regularly cleaned out, etc.
>>2670
>2021
>Anon eliminates microbial growth issues in his robowaifu coolant system by turning it into a microbrewery
>>2681
>2021.5
>Anon devises a finger sippy-straw extension (crazy-curl style is best ofc) that comes out from his waifu's forefinger so he can drink microbrew from there.
>>2681
>turn coolant system into distillery
>have old coolant drain from vagina
>get drunk on robopussy
>>2668
>Microbes are literally everywhere on the planet's surface. They form colonies known as the normal flora that not only spread out over our skin, for example, but in fact form an important part of the antimicrobial defense mechanism against deleterious ones.
That's where I got the idea. I just don't know of any that we could use off the top of my head. The microbes would need to grow well enough on whatever surface the skin is going to be made from. We might also want to make a nutrient lotion that we could rub on the robowaifu that would give our microbes a competitive advantage while killing off undesirable microbes, so the good ones can establish themselves.
>>2684
The balances set up at this stage of the planet's bio-history definitely favor this characteristic quite naturally, so it seems likely this should be quite doable. I would start with the standard flora from human skin in your research Anon, since her contact with our skin will be regular ofc.
WARNING
Once you research this area, you can never go back. The consequences will never be the same :^)
WARNING WARNING
No seriously. You don't even want to know about these things tbh.
>>2689 Nigga I've got a minor in microbiology.
>>2691 Haha fine then. Please lead the charge in this area for us Anon. I was simply trying to spare the innocent, and leave them in their blissful ignorance. :^)
>>2683
I like your thinking, Anon. Can't wait to read news stories of men sucking rum out of their robowaifu's tit and how it's creating impossible expectations on women. :^) Apparently silicone will absorb alcohol and swell in volume 10-15%, but the alcohol eventually dissipates from it without deterioration. I can't try this myself but it would be an interesting effect to see.

>>84
Copper could be used for storing fluids inside robowaifus. It's not well known, but water sitting in a copper vessel for a few hours will kill almost all the bacteria and viruses in it, and they cannot survive very long on the surface of dry copper without saline, although a few strains are more resistant to dry surfaces.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3312355/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3991429/
It's also great at conducting heat and safe to drink from so long as the pH is above 6.0, although you wouldn't want to use it for drinkable fluids in an alcoholic coolant system, since the pressure, heat and flow would increase the rate of copper dissolution. It could be used though for storing fluids inside that can't be accessed daily for cleaning maintenance, within some limits. Acidic fluids with a pH of 4.0 will leach enough copper ions after an hour to taste bitter and foul; after about 10 hours they will become toxic and cause nausea, so such a storage system has to be used with proper care and awareness. There's a lot of disinfo if you Google copper, since it's an essential mineral the body needs to filter out toxins. For an anecdote, I've been drinking and cooking only with water out of a copper jug for years and have a lot more energy and health than most people my age. Copper is also necessary for the brain to produce noradrenaline from dopamine, which is required to stimulate focused activity.
>>2754
Also, while digging around it seems gasoline is particularly dangerous to silicone robowaifus. Luddites could easily use it to attack them. How can we make their skin resistant to attacks? Even oil paints could be a nuisance if robowaifus are ever to walk free one day. Someone could shoot them with a paintball to destroy their skin and you'd never know who did it.
https://www.shinetsusilicone-global.com/catalog/pdf/rubber_e.pdf
This guy on /monster/ made a slime onahole, I thought you guys might be able to draw some inspiration out of it. https://smuglo.li/monster/res/22611.html
>>2754
>Can't wait to read news stories of men sucking rum out of their robowaifu's tit and how it's creating impossible expectations on women.
rofl. PATRIARCHY!11
>>2754
>>2755
Solid data Anon, much appreciated.
>>2756
Thank you Anon, the guy seems to be a talented modeler. Mind directing him to our vagoo thread for us?
>>2755
Interesting. We'll probably need an Anon who can conduct some tests for the group. One notion that comes immediately to mind: if she's going to be going out unescorted, have her dress in some kind of sport-suit made of a protective material like a denser plastic.
>>2761
>have her dress in some kind of sport-suit made of a protective material like a denser plastic
Their faces will still be exposed though. Maybe with all the bioweapons they're releasing we'll have anime robot girls buying groceries for us in hazmat suits. What a time to be alive!
>>2763 Kek. Maybe so.
Open file (67.17 KB 710x639 prototype.png)
So I've been working on developing a model that combines MuZero, the Intrinsic Curiosity Module, Go-Explore, Hindsight Experience Replay and Divide-and-Conquer MCTS to solve SNES RPGs and am faced with some pretty tough questions to solve:
>1. How can an agent learn to set its own abstract goals? For example, if an agent attacks an enemy with a certain spell, it may wish to go back and try a different spell on that enemy. Perhaps enemies don't respawn again in the same area and the agent must try it on a similar enemy in another area.
>2. How can an agent bail on a goal that is not achievable? Suppose an agent took a wrong turn of choices in a visual novel and its waifu dies from its decisions. It's unable to go back and do something different it wishes it could do unless it restarts the game. How can the agent discern possible goals from impossible goals?
>3. How can it transfer that desired goal to a realistic goal? This is similar to the above two questions. In the case of Question 1 it wants to transfer that goal to attacking a similar enemy in a different area with the same spell. In the case of Question 2, it wants to make sure it doesn't make the same mistake again that got its waifu killed by transferring that desired goal to protect another waifu.
>4. How can an agent be instructed to perform abstract goals with difficult to describe success conditions without a reward function? MERLIN (arXiv:1803.10760) provided some insight to this by training an agent to respond to entered text commands and getting a reward once it was achieved. However, it is limited by what you can implement as a reward function. Ideally you want to be able to instruct the agent to do many things. Something as simple as asking an agent to run in a circle is extremely difficult to implement into a reward function and only applicable to that one task.
>5. How can novelty search be enhanced with a value function? There's biological evidence that dopamine release in animals and human beings is enhanced when the perceived value of the novelty is high, whether it's beneficial or an unforeseen threat. Should the value function be merely based off survival of the agent's identity? How can and should the agent's identity expand and develop as it gains experiences? For example, it might not control the other party members but they are working together as one unit. It seems like this would require implementing some sort of abstract identity the agent is trying to preserve while exploring novel states.
>>3182
Also, a thought I've had for implementing goals is to represent them as a change in the state's latent variables. If the state has latent variables for counting money, a goal vector to increase money would be mostly zero except for positive values on the variables that count money. But I don't think it will work out that simply, because the network will learn its own compressed encoding to store more information.
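A minimal sketch of that goal-vector idea, assuming a disentangled latent where one index happens to encode money (which, per the caveat above, a learned encoder won't give you for free):

import torch

latent_dim = 64
MONEY = 7  # pretend index 7 of the latent happened to encode money

goal = torch.zeros(latent_dim)
goal[MONEY] = 1.0  # "increase money", zero everywhere else

def goal_reward(z_before, z_after, goal):
    # reward is the change in latent state projected onto the goal direction
    return torch.dot(goal, z_after - z_before)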
>>3182
>'Simplified Prototype Model
Haha, good thing too. :^) Nice graphic, btw.
>So I've been working on developing a model that combines MuZero, the Intrinsic Curiosity Module, Go-Explore, Hindsight Experience Replay and Divide-and-Conquer MCTS to solve SNES RPGs and am faced with some pretty tough questions to solve:
Q: how are you even doing that? In the most basic practical sense, I mean. Python-scripting modules to talk together?
>1. How can an agent learn to set its own abstract goals? For example, if an agent attacks an enemy with a certain spell, it may wish to go back and try a different spell on that enemy. Perhaps enemies don't re-spawn again in the same area and the agent must try it on a similar enemy in another area.
If an agent attacks an enemy with a certain weapon, a sword say, and it fails but observable damage occurs, then that's a clue the sword was at least somewhat effective, so maybe try a different one--let's say a bastard sword. Kind of like if a BFG worked some but wasn't quite there, then whip out the BFG9000 on him. If, on the other hand, not even the slightest damage occurred on the first attempt, then the algorithm probably should favor an alternate approach that doesn't involve that general class of weapon against the enemy at all. So, keep track of some kind of 'score-card association', temporally constrained and bounded by the set of current circumstances (stored in a set of dynamic Python dictionaries, say), for both the weapon class and that particular weapon. Then just re-sort the multi-variate dictionary after the first encounter using the updated scoring to find the next top three choices, pick one of those three at random, and go for it. This should vaguely simulate a reasonable 'choice' in the circumstances. (A toy sketch of this score-card idea follows at the end of this post.)
>2. How can an agent bail on a goal that is not achievable? Suppose an agent took a wrong turn of choices in a visual novel and its waifu dies from its decisions. It's unable to go back and do something different it wishes it could do unless it restarts the game. How can the agent discern possible goals from impossible goals?
To continue the above scenario: if you've tried a couple of different choices and neither works, then you'd probably begin to move the Goal variable more towards flight mode and less towards fight mode. Fail one more time at it, say, then just haul ass. If you need to re-spawn as a result of a series of bad choices, then at the least you know not to go that precise sequence route again. You should be storing a record of the previous history of states, not just the last particular one. During each temporal snapshot of the upcoming play sequence, compare 'frame-by-frame' the similarity to previous temporal sequences (pruning out duplicate irrelevancies such as taking the same entrance into the single-entrance-only dungeon) and get a 'running commentary' on the current progress, as it were. Discerning 'possible' from 'impossible' may prove, well, impossible. Do we always know the difference, for example? If humans tend to fail at a type of endeavor, then in general it's not unreasonable at this point in history to presume an AI will too. But don't let the impossible stop you Anon, haha. After all, does the bumblebee know it can't fly? Note: we did finally figure that one out in the end heh.
>3. How can it transfer that desired goal to a realistic goal? This is similar to the above two questions.
In the case of Question 1, it wants to transfer that goal to attacking a similar enemy in a different area with the same spell. In the case of Question 2, it wants to make sure it doesn't make the same mistake again that got its waifu killed, by transferring that desired goal to protecting another waifu.
For part a, it might just use the same type of sword against a similarly-classed enemy in a slightly different circumstance, based on the scoring approach mentioned above. For part b, it might use the previous encounter's 'commentary playback stream' mentioned above to make a brief analysis of the current circumstances, then tend to randomly choose slight variations early on during the encounter to potentially alter the outcome (if it was a bad one), or tend to reinforce the previous choice sequences (if it was a good outcome).
>4. How can an agent be instructed to perform abstract goals with difficult-to-describe success conditions without a reward function? MERLIN (arXiv:1803.10760) provided some insight to this by training an agent to respond to entered text commands and getting a reward once it was achieved. However, it is limited by what you can implement as a reward function. Ideally you want to be able to instruct the agent to do many things. Something as simple as asking an agent to run in a circle is extremely difficult to implement in a reward function and only applicable to that one task.
By enforcing behavioral dictates at a level above the straightforward reward-function-only level, maybe? When all else fails, the agent can just rely on a pre-programmed set of directives provided by the oracle (you, the developer, ofc). For an example analogy, say a Christian might face a conundrum: "what to do about Satanism being promoted on your previous imageboard, a circumstance you yourself allowed to take root simply by ignoring it." That Christian might ignore his own past failures and any merely social current embarrassments and look outward to the Bible--a guidebook directed for him by a higher Oracle--for guidance. In some similar sense you might direct particular outcomes for the agent at a higher level, when the lower-level systems such as reward mechanisms fail in a given circumstance.
>5. How can novelty search be enhanced with a value function? There's biological evidence that dopamine release in animals and human beings is enhanced when the perceived value of the novelty is high, whether it's beneficial or an unforeseen threat. Should the value function be merely based off survival of the agent's identity? How can and should the agent's identity expand and develop as it gains experiences? For example, it might not control the other party members but they are working together as one unit. It seems like this would require implementing some sort of abstract identity the agent is trying to preserve while exploring novel states.
<some sort of abstract identity the agent is trying to preserve while exploring novel states.
Precisely. The Theory of Mind could be valuable here. Mere survival is a baser instinct, and one we humans share with other biological systems around the planet. But for a human being, higher-order priorities may come into play. Self-sacrifice for the greater good: a soldier throwing himself on top of a grenade tossed into the bunker to save all his buddies, for a decent example of this. Animals won't do this, but humans might. Animals don't seem to carry an internal 'sense of personhood' (aka Theory of Mind), but normal humans obviously do.
These are more or less philosophical questions you're asking in the end. Well-worn-path philosophical answers may prove valuable here, Anon.
>Also a thought I've had for implementing goals is to represent them as a change in the state's latent variables. If the state has latent variables for counting money, a goal vector to increase money would be mostly zero except for positive values for the variables that count money. But I don't think it will work out that simply because the network will learn its own compressed encoding to store more information.
As indicated in my responses above to 1 & 2, keeping a running, multi-variate scorecard in a set of dictionaries might help you sort through this problem, Anon (toy sketch below). It's also pretty much directly suited to DoD (data-oriented design), which by now is a very tried-and-true programming approach to vidya dev. https://gameprogrammingpatterns.com/data-locality.html
In fact, many of the issues you're bringing up here have already seen attempted answers from vidya dev teams in the past, some approaches more successful, some less so. It might be informative to research the literature that's out there on this topic, as well as the guidebooks that exist. The GPU Gems set of collections comes to mind for me here. https://developer.nvidia.com/gpugems/gpugems/foreword
Good luck. Great questions Anon.
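P.S. A toy sketch of that scorecard idea, all names and scoring hypothetical ofc:
import random

weapon_scores = {'sword': 0.0, 'bastard sword': 0.0, 'bfg': 0.0, 'fire spell': 0.0}

def record_encounter(weapon, damage_dealt):
    # update the running score for that weapon after an encounter
    weapon_scores[weapon] += damage_dealt

def next_choice():
    # re-sort by score, keep the top three, pick one of those at random
    ranked = sorted(weapon_scores, key=weapon_scores.get, reverse=True)
    return random.choice(ranked[:3])

record_encounter('sword', 12.0)   # observable damage: the sword class looks promising
print(next_choice())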
>>1993
>and (for some reason) All Nippon Airways is footing the bill to develop the project.
Quite possibly it's for remotely-operated maintenance procedures. Crawling along inside the hull of a Boeing 7X7 commercial airliner is not only cramped and uncomfortable for the maintenance tech or engineer, it's also a problem-space that requires literally hundreds of thousands of pages of documentation to be immediately on-hand to perform the many different maintenance tasks effectively. Back in the '90s this exact need was the single largest commercial driver of so-called wearable computers. Today, the problem is surely even more complex. Having a tele-operated robot that can perform maintenance tasks in situ could not only make the process far more efficient and comfortable for the tech, it may be the difference between a craft being grounded at the ramp on the spot, or successfully making the turnaround time-window for the next flight out. Pushing Tin is big business, Anon.
>>3184
>Q: how are you even doing that?
I'm taking the basic ideas behind them and blending them together. The dynamics network of MuZero predicts the next state and reward, but rewards in ICM are novelty, so it predicts that instead, which the MuZero MCTS uses to pick actions to explore unknown novel states. Adding the ideas behind Go-Explore required a new method though, due to it having a different implementation. The basic idea behind it is it selects a novel state it didn't fully explore from an archive, returns to that state in the game, explores from there, saves new interesting states into the archive to explore later, and repeats. Go-Explore though saved novel game states and reloaded them to try something new, which isn't applicable to real-world problems. One thing I could do is reload these states within its own model of the game and explore them within its imagination. In a sense the MCTS already does this, but only from its current state.
What I'm working with at the moment is concatenating a goal with the state as input to the policy network to choose an action towards reaching that goal. The goal is chosen by picking a novel state from the archive to reach and calculating the distance between the current state and that novel goal state. This allowed me to combine Hindsight Experience Replay, since if the agent was unsuccessful in reaching the goal, the goal can be replaced with a virtual goal to teach it that the series of actions it just performed is what you do to reach that new state from the previous one. Divide-and-Conquer MCTS is there to break goals down into smaller and smaller subgoals that are solved independently and recursively, continuously improving its ability to create long-term plans in complex environments even without reaching the final goal.
>So, keeping track of some kind of a 'score-card association', that's temporally-constrained and bounded by the set of current circumstances (stored in a set of dynamic Python dictionaries, say)
The problem with using a Python dictionary is that it's non-differentiable, but there is a technique that can emulate dictionaries: https://arxiv.org/pdf/1907.05242 It's not really clear what the AI's internal state is storing, especially in the beginning when everything is initialized randomly. One possibility is using the state to create the dictionary key and save information there for a future similar state to read, since the technique in the paper also works like a fast approximate nearest-neighbor search. Also, it doesn't have access to the game's internal state (it could, but I find that a less interesting problem), so it doesn't know whether a sword did better damage or not. It has to learn to read the numbers shown on the screen, associate that with attacks, and associate that with defeating enemies more quickly, all by searching curiously within its own representations and model of the game world.
>if you've tried a couple of different choices, and neither work then you'd probably begin to move the Goal variable more towards the flight mode and less towards the fight mode
Since the network searches future states by novelty, I might be able to implement this through its predictions somehow. I'm still working through the DC-MCTS paper, but it might be possible to detect discontinuity in the created plans despite all attempts to find a path to the goal, and then abort after a certain amount of effort that leads to a predictable state that isn't the goal.
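A minimal sketch of that goal-concatenation plus hindsight relabeling, with every dimension and name made up (not your actual model, obviously):
import torch
import torch.nn as nn

state_dim, goal_dim, n_actions = 32, 32, 12
policy = nn.Sequential(nn.Linear(state_dim + goal_dim, 64),
                       nn.ReLU(),
                       nn.Linear(64, n_actions))

state = torch.randn(state_dim)
goal = torch.randn(goal_dim)                   # a novel state pulled from the archive
logits = policy(torch.cat([state, goal]))      # action logits towards that goal
action = torch.distributions.Categorical(logits=logits).sample()

# hindsight relabeling: if the episode ended in reached_state instead of goal,
# store the transition again with reached_state as the 'virtual' goal
reached_state = torch.randn(goal_dim)
relabeled_input = torch.cat([state, reached_state])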
>to potentially alter the outcome (if it was a bad one) or tend to reinforce the previous choice sequences (if it was a good outcome)
The problem is how does it determine what is a good outcome and what is a bad outcome? Novelty search only avoids death because it predicts it will start all over again from an extremely predictable state. More often than not it will choose actions just to see what happens rather than because they're effective, unless it's slogging through stuff it already knows to get to a more novel state. One of my requirements is to keep the system as open-ended as possible. Later it will be trained to speedrun the game as fast as possible by training against itself in self-play to learn a value function, but in the exploration phase it has to be capable of learning the consequences of actions and finishing the game without one.
>When all else fails, the agent can just rely on a pre-programmed set of directives provided by the oracle (you, the developer, ofc).
This might be possible later on when I embed a chat function into the network. My hope is it will learn to associate words with the game world and learn to ask questions it is curious about. For now, as far as pre-programmed directives go with my implementation, the only things that can be tinkered with are its goals and taking over control with the gamepad to guide it with supervised learning.
>These are more or less philosophical questions you're asking in the end.
There was an interesting paper questioning the claim that AlphaZero was starting from zero, which asked, "How much of the human mind is built-in, and how much of it is constructed by experience?" (https://arxiv.org/pdf/1801.05667) It's a really interesting question. I live in the forest, and in my experience many wild animals are as conscious as human beings, far more intelligent and alert than many people actually, but with far less brain power and physical capability to manipulate the environment. Some species of animals have been shown to have their own cultures and beliefs passed on from generation to generation, create rumors and gossip, and protect their families. Crows especially can solve complex problems with tools to get a piece of food, despite having smooth brains and lacking the neocortex that neuroscientists say is required for cognition. So I don't think these are innately human abilities but something far more basic. I'm not a neuroscientist, but my suspicion is that cognition is created by slow-firing neurons.
Thanks for all the feedback. It has given me a lot to think about and ideas to work with.
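Tangent for other anons on that dictionary-emulation paper: the read path boils down to something like this toy (sizes made up, and this is a plain nearest-neighbor version, not the paper's product-key trick):
import torch

keys = torch.randn(1000, 64)      # learned memory keys
values = torch.randn(1000, 128)   # learned stored values
query = torch.randn(64)           # derived from the current state

scores = keys @ query             # similarity of the query to every key
top = scores.topk(16)             # keep only the 16 nearest keys (sparse read)
w = torch.softmax(top.values, 0)
read = (w.unsqueeze(1) * values[top.indices]).sum(0)  # differentiable weighted read-out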
>>3189
>Thanks for all the feedback. It has given me a lot to think about and ideas to work with.
Glad to hear it, YW. You have a remarkable mind. I hope we will see real robowaifus someday, but it will only happen with a metric shitton (several, actually) of hard-won effort. Good luck to us all.
Been working on some PyTorch code to create layers where each pixel is fully connected to neighboring pixels, and by accident found out that by disabling the bias they emulate activity patterns similar to the brain. It's not fast, but it isn't terribly slow either. Still playing around with them trying to understand their effects. They should also work in 3 dimensions by setting cross=True, but I've only checked that it works with 3 channels. Pretty sure the way it's coded at the moment all the channels are connected, which isn't how it's supposed to be, but I guess that has its own uses. Code for the layers available at: https://gitlab.com/kokubunji/linear2d
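For anons who want the flavor without reading the repo, here's my rough guess at the concept--each pixel gets its own weights over its 3x3 neighborhood, no bias. Single channel only, and to be clear this is not the linked implementation:
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborLinear2d(nn.Module):
    # every pixel has an independent weight vector over its 3x3 neighborhood
    def __init__(self, height, width, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(height * width, k * k) * 0.1)

    def forward(self, x):                                   # x: (batch, 1, H, W)
        b, c, h, w = x.shape
        patches = F.unfold(x, self.k, padding=self.k // 2)  # (b, k*k, H*W)
        patches = patches.transpose(1, 2)                   # (b, H*W, k*k)
        out = (patches * self.weight).sum(-1)               # per-pixel dot product
        return out.view(b, 1, h, w)

y = NeighborLinear2d(32, 32)(torch.randn(4, 1, 32, 32))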
>>3223
Interesting. Seems to me it might have some implications imblygin :-DDD alongside optical-flow fields Anon. Not sure if you're aware of them, but here's a list from Elsevier shilling related books. https://www.sciencedirect.com/topics/engineering/optical-flow-field
>It's not fast but it isn't terribly slow either.
If you want fast (ie, realtime) use GPUs. Here's a soft-intro from NVidia: https://devblogs.nvidia.com/even-easier-introduction-cuda/
Don't want to deal with ugly (and error-prone) C code? Then use Thrust instead (my recommendation BTW): https://github.com/thrust/thrust
OpenCL is the tryhard runner-up in this field. https://www.khronos.org/opencl/
OpenCV would be by far the easiest approach if it in fact serves your purposes somewhere in the maze of its available libraries.
>>3223 that last one with the hippocampus is beautiful to watch, actually.
>>3224
I'm not sure what flow fields are useful for except in spatial transformer networks. I don't have any experience with video processing. Technically it's supposed to run faster on the GPU, but in practice it's 100x faster on the CPU using PyTorch, and TensorFlow doesn't even support grouped 1D convolutions. It's too far out of my expertise at the moment to write my own GPU kernel code interfaced with PyTorch. When I have some time I'm gonna try implementing it another way without Conv1D, and maybe that could be ported one day. I found some code for depthwise 2D convolutions here: https://github.com/rosinality/depthwise-conv-pytorch
>>3227 Kek, the new implementation without Conv1d is 5x faster on CPU and 10x faster on GPU. Now as image size increases the GPU is exponentially faster. The only limiting factor is memory.
>>3229
>Now as image size increases the GPU is exponentially faster. The only limiting factor is memory.
The lag you see at small sizes is due to the separation between the host and the device memory spaces, and the need to transfer the dataset from the host up to the device before processing can begin there. Generally, this is why you reserve GPGPU for really heavy lifting, b/c this initial overhead can be a significant ratio of the total time with a small dataset. 1000x speedups on the GPU (say, against an 8-core i9 class) are not at all uncommon for proper dataset-problem-spaces, and this strikes me as being one of those kinds. Tbh, the tech still isn't there on the GPUs either, and as you implied, memory is the limiting factor. I worked a gig where we needed to process YUGE data (150-200 4K stereo sep plates @ 24fps), and the bus bandwidth simply wasn't even close. I had a chance to discuss this very problem with a pair of NVidia research scientists at GPUcon, and they were well aware of this issue in the HPC arena. 'Umm... eh, we're working on inventing an optical bus, yea that's it!' was the simp response. :^)
>>3227
that gif is interesting to watch. i can easily see how that could be used in realtime to 'rectify' text under all sorts of affine transforms. Pretty much any kind of detail ofc, not just text; it's just that regular geometric shapes are easier for me to think about heh. :^)
>>3230
>>3231
BTW, this is an example of the costly data-transfer statement using a simple Thrust case:
// transfer data to the device
thrust::device_vector<int> d_vec = h_vec;
https://github.com/thrust/thrust#examples
In this trivial example the delay is nothing (but then again, so is the processing load, so it may be best kept on the host for a given device; as always, PROFILE), but in a practical case this lag can be significant. Like all engineering, this requires analyzing the trade-offs involved and optimizing everything simultaneously towards your specific use-case goals. In our situation ofc, this simply means 'living, breathing' robowaifus. No biggie, haha. :^)
>>3234
Oh, I forgot to point out that in that example you pay the transfer toll twice, since you also have to transfer the 32-million-int vector back to the host after processing.
// transfer data back to host
thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());
Here's the golden code and the whole reason for going to all the trouble in the first place.
// sort data on the device (846M keys per second on GeForce GTX 480)
thrust::sort(d_vec.begin(), d_vec.end());
Note the referenced perf comment is way out of date; today's nominal GPUs would certainly be able to sort billions of ints a second. The second example only 'pays the toll' in the upload direction, since the result sum is merely a single int (and normally wouldn't even be transferred back down, but simply used onboard the device).
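For the Python anons, the same round trip in PyTorch terms (assumes a CUDA build; sizes arbitrary):
import torch

h_vec = torch.randint(0, 1 << 30, (32_000_000,))  # 32M ints on the host
d_vec = h_vec.cuda()                              # pay the upload toll
d_vec, _ = torch.sort(d_vec)                      # the part the device is actually fast at
torch.cuda.synchronize()                          # CUDA calls are async; wait for the sort
h_vec = d_vec.cpu()                               # pay the download toll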
Open file (3.53 KB 335x335 thegrid.png)
>>3231
I need YUGE Sanic memory. With large images I have to augment training data on the GPU while it waits for new data to load. The layers also require a lot of parameters: channels x height x width x 4. I created this for generating artwork in HD, but it's impossible to fit more than a few layers this size into my GPU. It's not going to be feasible with my original idea of hundreds of neighboring pixels spread out exponentially so that any pixel can communicate with any other pixel within a maximum of 10 steps using a minimal number of connections. Another option though might be to use Fibonacci spirals, or to evolve random patterns to find what works best. On smaller scales though this should be capable of doing some pretty interesting stuff. With 6 neighboring pixels it can communicate in 3D. With 8 it can do 4D. The edges are now wrapped around onto a torus too. The channels also wrap around so they can all communicate. Out of curiosity I'd like to try converting images into 10D cubes, process them in that 10D space, output 10 latent variables, then input them into a 10D network and project it back into 2D to see if it can disentangle high-dimensional data better that way.
>>3234
Thanks, this will be really useful when I try to optimize it one day. I need my robowaifu and must Thrust. :^)
>>3236
I had no idea sorting was that fast on the GPU. Damn, I could do some insane Markov chain Monte Carlo stuff with that kind of speed.
>>3237
>Out of curiosity I'd like to try converting images into 10D cubes, process them in that 10D space, output 10 latent variables, then input them into a 10D network and project it back into 2D to see if it can disentangle high dimensional data better that way.
Now I'm curious, too! :^) GPU cores are autistic little retards that can only draw using MS-Paint, and further, always insist on spending 9'001 hours drawing each frame. But the power-crayons they eat give them super-fastness go powers that compress those hours down to just 42 nanoseconds, our-time heh.
On a different branch altogether, I'm wondering if Flowtron might be useful to us here on /robowaifu/ anywhere in the nearer-ish-term. https://arxiv.org/abs/2005.05957
>Abstract
>In this paper we propose Flowtron: an autoregressive flow-based generative network for text-to-speech synthesis with control over speech variation and style transfer. Flowtron borrows insights from IAF and revamps Tacotron in order to provide high-quality and expressive mel-spectrogram synthesis. Flowtron is optimized by maximizing the likelihood of the training data, which makes training simple and stable. Flowtron learns an invertible mapping of data to a latent space that can be manipulated to control many aspects of speech synthesis (pitch, tone, speech rate, cadence, accent). Our mean opinion scores (MOS) show that Flowtron matches state-of-the-art TTS models in terms of speech quality. In addition, we provide results on control of speech variation, interpolation between samples and style transfer between speakers seen and unseen during training. Code and pretrained models will be made publicly available at https://github.com/NVIDIA/flowtron
https://news.developer.nvidia.com/flowtron-speech-synthesis-model/
Open file (64.79 KB 538x771 flowtron.png)
Open file (96.40 KB 600x892 forever average.png)
>>3240
Tacotron sounds better, but that might be due to Flowtron trading off parameters to capture more expressiveness at the loss of quality. The paper isn't much use to us right now, but one or two papers down the line something amazing might happen. Expressiveness in voices is hard to capture with machine learning, because backpropagation has a huge problem with only converging to an average with very little variance. Flowtron is much better at capturing variance in pitch, but I imagine the quality suffers because it's still finding the average for the variances not being captured.
This sort of ties in with an experiment I was doing last night to gain a better intuition of why my layers were failing to train quickly. I created a simple problem where the network has to compress 4 pixels into 1 pixel, do this 4 times, then reconstruct the original pixels. The answer is obvious: it only needs to pick the top-left, top-right, bottom-left and bottom-right pixels and it's solved. But what does backpropagation actually do? It gets stuck on a plateau by choosing the average of all 4 of them. After about 6000 minibatches it finally figures out it only needs to choose one pixel, and solves it after 8000. Backpropagation really sucks at disentangling information on its own. There's nothing guiding where the error should flow. This is a huge problem for the layers I'm creating, since they don't rely on a kernel that can focus on picking one specific feature out of some data and ignore other gradients like convolutions do. Once I started stacking my layers this averaging effect became exponentially more difficult to solve. Four layers deep and it made almost no progress in an hour.
To combat this problem I introduced a cost function on the weights of each layer to select at least n features and minimize the selection of other features, without overfitting to the features it has already selected:
cost = (n**0.5 - weight.norm(2, feature_dim))**2 + weight.mean(feature_dim)**2
Which is PyTorch code for taking (the square root of n minus the Euclidean norm across the feature dimension being selected from)^2, plus (the mean across the feature dimension)^2.
Now instead of taking forever to train, I can make the network eight layers deep and it quickly disentangles the features within ten minutes, finding the most meaningful features first and slowly grinding away any connections that aren't necessary, while also adapting to changes in the gradient when the feature selection becomes wrong. I haven't tested it yet on other types of neural networks, but I think this cost function will be extremely useful in other machine learning applications, if someone hasn't discovered it already.
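In case anyone wants to try it, here's that cost wrapped up as a standalone function (feature_dim and n are whatever fits your layer; usage shown on random weights):
import torch

def selection_cost(weight, feature_dim, n=1):
    # push each unit to commit to ~n input features (L2 norm -> sqrt(n))
    # while driving the mean weight across those features towards zero
    return ((n ** 0.5 - weight.norm(2, feature_dim)) ** 2
            + weight.mean(feature_dim) ** 2).sum()

w = torch.randn(8, 16, requires_grad=True)  # 8 units selecting from 16 features
loss = selection_cost(w, feature_dim=1)     # add this to the task loss per layer
loss.backward()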
>>3249
Hmm. Sounds like it's already doing a pretty good job. That would be pretty exciting if you made an original discovery here, Anon! :^)
Idea: what if you took this guy's approach to "Multi-Head Attention and Position-wise Feed-Forward Network", and used it as a preprocessor to your pruning optimization? Do you think it might help your system resolve the important features even more quickly by having a sort of 'pre-filtered' selection that already satisfies some pre-defined feature-selection arbitration?
https://medium.com/@kolloldas/building-the-mighty-transformer-for-sequence-tagging-in-pytorch-part-i-a1815655cd8
https://medium.com/@kolloldas/building-the-mighty-transformer-for-sequence-tagging-in-pytorch-part-ii-c85bf8fd145
https://github.com/kolloldas/torchnlp
I believe this is the original paper (which presumably you've already seen):
>Attention Is All You Need
>The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
https://arxiv.org/abs/1706.03762
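For reference, the core operation in that paper really is just a few lines; a toy single-head version in PyTorch (no masking and no learned projections):
import math
import torch

def attention(q, k, v):
    # scaled dot-product attention: softmax(QK^T / sqrt(d)) V
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v

x = torch.randn(10, 64)       # 10 positions, 64-dim features
out = attention(x, x, x)      # self-attention: each position mixes all the others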
>>3250 >That would be pretty exciting if you made an original discovery here, Anon! Assuming someone did do this, how would one even go about determining this?
Open file (191.95 KB 900x1260 the haruhi problem.jpg)
>>3251
Anonymous users solve AI problems puzzling data scientists for decades
<They're building opensource catgirl meidos and it's terrifying.
The only way to find out is either through an exhaustive search of the literature or by asking researchers working on similar problems.
>>3254 kek. we need this headline anon. keep moving forward!
Open file (50.26 KB 774x1024 swish.png)
Open file (127.81 KB 1516x964 lstm and gru.png)
Open file (182.10 KB 611x715 PurkinjeCell.jpg)
Open file (120.81 KB 667x625 hypertorus.png)
Open file (36.32 KB 540x480 portal.jpg)
>>3250
Yeah, I really like the idea of transformers. They're effective at overcoming this averaging problem. The problem with them though is they're expensive in parameters and compute. Another issue is that once you start multiplying things together too much, it cuts off the flow of the gradient to deeper parts of the network and they become untrainable due to the vanishing gradient problem. I think there's an important lesson to be learned from the Swish activation function, x·sigmoid(βx), found by automated search, that outperforms ReLU: https://arxiv.org/pdf/1710.05941.pdf
The beauty of Swish is it can bottleneck gradients to part of the network like ReLU, preserving them to reach deeper layers, but also open these bottlenecks back up again and allow the gradient to flow to other areas when necessary, whereas ReLU can't. Similarly, once you start using products it creates dead zones in the gradient that are only activated under certain circumstances. It's effective, but it seems like a crutch for overcoming the annoyances of gradient descent. It requires exponentially more parameters separated into different attention heads, rather than actually compressing information together and distilling knowledge from it. SentenceMIM, for example, outperforms Nvidia's 8-billion-parameter GPT2 model with just 12 million parameters. It's also worth noting the LSTMs used in SentenceMIM apply sigmoid and tanh before multiplication, which lets gradients flow without exploding or vanishing. So I think the way forward is forming more intelligent gradients, rather than cutting them off completely in the hope that different parts of the network specialize. The neuromodulatory network in ANML that controls the flow of gradients is also interesting and amazing progress in this direction: https://arxiv.org/pdf/2002.09571.pdf
What originally inspired my idea a year ago was dendritic branching. I wanted to capture this hierarchical tree-like structure somehow, but working only with 2D images wasn't enough. What fascinates me about these branches now, as I've started to explore this idea in 3 dimensions, is that they only either go left or right, like binary search, and in a computer we don't have to worry about the limits of spatial reality. We can wrap a 1D space around a circle and in one step reach any point on it. Similarly, if you wrap a 2D space around a torus you can reach any point on it in two steps, corresponding to a step in each dimension. We can continue adding more and more dimensions to this torus. A way to mentally picture a hypertorus is to think of the game Portal and opening up 3 yellow portals and 3 blue portals on the 3 pairs of opposite faces of the room. So if we take a 729x729 greyscale image and reshape it into a 12D hypertorus that still has the same 3^12 features, now every pixel in the image is connected within 12 steps, using only 24 parameters per step, or 288 in total for each feature. So far in my early experiments it seems entirely possible to reuse the same parameters each step, but it's more difficult to train and captures far less information. I still have to try it with my cost function in these higher dimensions and see how it helps. Either way, a fully connected layer with 3^12 input features to 3^12 output features would require 1315 GB of memory to compute, but on a 12D hypertorus the features can be connected together with at most 2.9 GB in the worst case, or 243 MB reusing the parameters.
A 3-channel 2187x2187 image could be processed in 15D with at most 120 GB, or 8 GB reusing parameters, which is entirely possible on today's upper-end hardware. That includes the memory cost of the calculations and gradients, minus the overhead of whatever machine learning library is being used. PyTorch isn't really optimized for calculations in such high dimensions and circular wrapping of connections. What I'm working with at the moment requires duplicating the data twice for each dimension and padding each dimension by 1, so instead of requiring 3^dim in memory it requires 3*dim*5^dim, which restricts me to using 10 dimensions at most. But if these higher dimensions prove useful for something, then I'll certainly write my own code to optimize it. It's really fascinating just being able to watch it process data. Can't wait to start throwing images into wacky dimensions and see what the hell it spits out.
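If anyone wants to play along at home, the wrap-around step itself is tiny. A toy of the reshape-and-step idea (not the actual layer, and the mixing weights are made up):
import torch

x = torch.randn(1, 1, 729, 729)        # toy greyscale image, 729*729 = 3^12 pixels
t = x.reshape(1, *([3] * 12))          # the same pixels viewed as a 12D hypercube
dim = 5                                # take one step along one of the 12 dimensions
left = torch.roll(t, 1, dims=dim)      # torus neighbor in the -direction (wraps around)
right = torch.roll(t, -1, dims=dim)    # torus neighbor in the +direction
mixed = 0.5 * t + 0.25 * left + 0.25 * right  # mix each site with its two neighbors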
Open file (3.08 MB 1000x350 demo_yt.gif)
>>3262
I love the 'torus-wrapping' effect. Surely there's a fundamental & beautiful mystery of the universe hidden away in there somewhere! :^)
I think you can make faster progress in the specific domain of "solving for features of characters' behaviors" (if that's a goal) if you fundamentally move your domain of concern away from the pixels on the screen and onto the underlying actions of the characters themselves. This attention-shift would not only recover much of the information lost by the transformational chains required for projection onto the 2D coordinates of the screen, but would also make the problem-space far more intuitive for the humans involved to solve at a fundamental level.
For example, take a 3D line positioned between two specific points inside 3D space whose features you somehow wanted to track in a video. If all you choose to work with at the start is the jagged string of pixels it forms on the screen, then figuring out the accurate positional details of the line requires a fair amount of processing power: 'walking' all those independent pixels, confirming by some means that each is in fact part of the line, and then reconstructing them all into a line again with the positional information derived as well. OTOH, if you just abstract the line into two 3D points at the very start---say, one at each end of the line---and then simply confirm the positions using the underlying pixels, you not only have a more accurate positional representation, you're also performing far fewer calculations.
To cast things in another light, and if I can put on the animator's hat for a moment, an important tool-class 3D animators use for character animation is the so-called animation rig. These aren't systems that force the animator to literally move every.single.vertex. of the entire character mesh to the desired location in 3D space; rather, they abstract away those mundane details into the mere essentials, namely 'grab this thing and move it to here at this time-point'. For example, if at frame 1488 Anon wanted to move a character's hand to that important book to pick up and begin reading in subsequent frames, he would just lasso the rig's hand-control icon (usually positioned floating near the particular hand itself), which would target the inverse kinematics solver of the rig onto that specific extremity, then manipulate the transform control to position the hand at the book itself and set a keyframe. The system would then typically use some variation of LERP (linear interpolation) to fill in the in-between positions over time (toy sketch below). Alternatively, the animator could instead literally move every single vertex over the same number of frames, but the effort would not only prove far more tedious, it would certainly be more error-prone and inaccurate than the smooth interpolation system.
While the analogy isn't quite perfect, I think there are some credible similarities here with using just the pixels on the screen to pick out underlying features of the character's behavior. A far more efficient and feature-rich approach in my opinion would be to use pose-estimation on the character first, then use your system on this much smaller set of 'control points'. This focus on the underlying 'animation-rig', as it were, of the characters will greatly simplify the computations involved and also make the process far more intuitive for us humans involved.
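The LERP bit is trivial, but here's the sketch anyway, reusing the frame number from above (the second keyframe and the values are made up):
def lerp(a, b, t):
    # blend between keyed values a and b; t runs 0..1 across the in-betweens
    return a + (b - a) * t

hand_y_1488, hand_y_1500 = 0.0, 12.5   # keyed hand heights at frames 1488 and 1500
for frame in range(1488, 1501):
    t = (frame - 1488) / (1500 - 1488)
    print(frame, lerp(hand_y_1488, hand_y_1500, t))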
>3D human pose estimation in video with temporal convolutions and semi-supervised training
>In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints. In the supervised setting, our fully-convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11%, and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semi-supervised settings where labeled data is scarce. Code and models are available at https://github.com/facebookresearch/VideoPose3D
https://arxiv.org/abs/1811.11742
https://www.invidio.us/playlist?list=PLFt_AvWsXl0fEx02iXR8uhDsVGhmM9Pse
