That is super cool. Could we simulate it in a virtual environment? If so, would that be the first Matrix-like virtual world?
Great, now I'm imagining a group of flies in leather trenchcoats.
Mr. Fruiterson, how can you buzz if you don't have a labium?
I think they already did that with a nematode.
OpenWorm hasn't mapped all the neurons yet. But soon 🤞
This will be critical in the development of new fruit fly antidepressants.
This hypes me so much more than GPT-based tools
As neural networks in AI are inspired by nature, new techniques will surely follow the insights gained by such brain mapping research.
Neural networks in AI barely relate to any biological neural network.
Neural networks in AI are essentially a "scored" pachinko machine, with each peg carrying a different number that pushes a "score" up for the "right" answer.
Basically, just a really fast and expensive sieve filter.
I said "inspired by" and not "exact digital replicas".
In classical MLP networks a neuron is modeled as an activation function of its inputs. The connections between neurons are "learned": each is a weight that determines the influence of one neuron's output on the next neuron's input. This is indeed inspired by biological neural networks.
Interestingly, in some computer vision deep learning architectures, we have found structures after the training procedure which are even similar to how human vision works.
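The classical neuron model described above is small enough to sketch in a few lines. This is a minimal illustration, not any particular library's API; the function name and numbers are made up:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, then a nonlinear
    # activation (sigmoid here). Training would adjust the weights.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# One neuron with two inputs and hand-picked weights.
out = neuron([0.5, -1.0], [0.8, 0.3], bias=0.1)
print(round(out, 3))
```

Stacking layers of these units, with each layer's outputs feeding the next layer's inputs, gives the multilayer perceptron being discussed.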
There are a bunch of different artificial neural network types, most – if not all – inspired by biology. I wouldn't be so bold as to reduce them in that absurd manner you did.
I said “inspired by” and not “exact digital replicas”
It's not, though. It's best described as inspired by a big pachinko machine with weighted pegs.
It is almost in no way inspired by it. That's just propaganda being put out to make AI more palatable and personable.
There are a bunch of different artificial neural network types, most – if not all – inspired by biology. I wouldn't be so bold as to reduce them in that absurd manner you did.
I would be, because it's factual.
If it were "inspired by", it would be able to tell the difference between running over a person and avoiding a car, for example. It wouldn't start hallucinating when asked simple questions, because a biological brain acts in congruence with its inputs.
That happens because of a web of interconnections spanning multiple spheres, with two major ones acting as checks on each other. Which is nothing like any current AI model.
In current models, each token has a limited number of interconnects, and always to a neighbor node. That is nothing like a biological neuronal network.
It's not, though. It's best described as inspired by a big pachinko machine with weighted pegs. It is almost in no way inspired by it. That's just propaganda being put out to make AI more palatable and personable.
Get your facts straight.
The artificial neuron underlying the multilayer perceptron was first proposed in 1943 and was indeed inspired by biological networks: https://doi.org/10.1007/BF02478259
You can be sure this wasn't to make it "more palatable", wtf.
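For context, that 1943 model is just a binary threshold unit: the neuron fires if enough of its inputs are active. A rough sketch of that idea (function name and thresholds are illustrative, not from the paper's notation):

```python
def mp_neuron(inputs, threshold):
    # Threshold unit in the 1943 spirit: fires (1) if and only if
    # the number of active binary inputs reaches the threshold.
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, threshold 2 behaves like logical AND,
# and threshold 1 behaves like logical OR.
print(mp_neuron([1, 1], threshold=2))  # AND of (1, 1)
print(mp_neuron([1, 0], threshold=2))  # AND of (1, 0)
print(mp_neuron([1, 0], threshold=1))  # OR of (1, 0)
```

The point of the original paper was exactly that networks of such units can compute logical functions, which is why it's cited as the biological inspiration for later ANNs.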
Regarding the rest of your reply:
You seem to be expecting a fully functioning digital replica of the human brain. That's not what current ANNs in modern AI methods are.
Although they are at their core inspired by nature (which is why I originally said that advancements in brain research can aid the development of more advanced AI models), they work structurally differently. ANNs, for example, are just simplified mathematical models of biological neural nets; I've described their basic properties above. Further characteristics, like neurogenesis, transmission speeds influenced by myelinated or unmyelinated axons, and different types and subnets of neurons (like inhibitors), are not included.
There is quite a large difference between simplified models which are "inspired by" nature and exact digital replicas. It seems you are not accepting this.
Welcome to Microsoft 476! With new features such as fly! That's right! A real simulated fly will help you. Introducing "the wall": it's where the fly lives! It will stay out of your way unless you call it. Set up your own buzzing noises! It will remind you in perfect stereo fly sounds about your incoming meeting!
So to answer what I assume is going to be the most common question here: this is a circuit diagram, but every neuron is like its own little programmable integrated circuit with a small amount of internal memory, and those aren't mapped here. So this is an excellent model for exploring how neurons connect to each other and gaining insight into cognition and the function of the brain, but it is far from something that can simply be simulated on a computer.
can this map be... simulated?
perhaps put into a computer program with simulated inputs from a virtual environment?
There's a lot of "calculations" done internally in a neuron that we can't map yet
Fucking DLLs
Shared objects on Linux.
I've been using Linux for longer than I've been an adult, I've worked in the field for around fifteen years, and TIL what .so means. Thanks!
We're probably pretty similar. I haven't been a Linux user as long as I've been an adult (close), but if you include BSDs, then I have, since I dabbled w/ FreeBSD as a kid.
I'm a SW engineer and I like compiled languages, so linking in C libraries comes w/ the territory. If it wasn't for that, I would probably just call them DLLs (dynamic-link library, FWIW), since they do the same thing at the end of the day.
I was a sysadmin, now I'm nominally devops. I haven't done real development for probably 21 years, so I didn't interact with SOs or DLLs much. (I actually did know what DLL means, but I have no clue why. Thanks though!)
I didn't use pure BSD until I was eighteen - I think I used Macs a time or two before then. In fact, I'm pretty sure the first time I used BSD was installing it on an iMac I bought off of Craigslist and I did so to experiment with its firewall functionality. What did you do with it as a kid?
Honestly, not much. I took a programming class at the local community college during high school, and an older gentleman gave me an install disk. So I installed it on an old PC and tinkered a bit, but it didn't have internet so I only had the base install.
I switched to Ubuntu my freshman year at college because Windows broke on my rented computer and I didn't want to deal with IT. I tried switching to FreeBSD on my laptop a couple years later, but it wouldn't sleep properly, so I went back to Linux (Arch at this point). I still used FreeBSD on my toy servers and NAS, which ended a few years later when I switched everything to openSUSE (Leap on server/NAS first, then Tumbleweed later on my desktop and laptop).
That said, my kids haven't really used Windows, they either use my computers running Tumbleweed or ChromeOS at school.
I still really like FreeBSD, but I don't use it because I had issues getting Docker to work (needed for self-hosted LibreOffice Online), and I prefer everything to be in the same family, and having openSUSE work everywhere is nice. It still holds a place in my heart though, so I make sure my personal projects work properly on FreeBSD. Who knows, maybe I'll use it if I ever replace my router with a DIY setup (currently use Mikrotik).
You sound like someone with whom I'd get along well. My Linux origin story isn't terribly dissimilar to your BSD one; I hosted a file server on a Windows server when I went to college. I met another nerd there, somewhat older than me since I went to college early, and he recommended replacing my Windows server with Linux. I don't recall if he gave me the install disk. I think my first Linux system was Red Hat before they became Enterprise, and my friend was right - it worked better than a Windows server. I tried to convert all of my systems to Linux at that point, but I still lived with my parents and they paid for AOL for Internet, which (so far as I could tell at the time) had no Linux compatibility. Also, I gamed a lot, and back then there was nothing like Proton or even (so far as I knew) WINE.
I had to look up what Tumbleweed was after reading your post. I haven't used any form of SUSE for years and years. I use mostly Fedora for my workstations or CentOS/Alma/Rocky for my servers because I was an RHCE for a while (now expired, I think) and was most comfortable in that ecosystem.
My kid has never touched Windows AFAIK; the only Windows system in my network is my wife's work computer (and one VM I setup while experimenting with something, but that's gone now). The kid has two tablets and a laptop I put Linux on, but they're too young to really care about anything but YouTube on those systems. I'll get 'em yet, though!
What got you on SUSE?
The story there is fairly simple. Basically:
- Ubuntu - got an install disk on campus or from a friend, don't recall; my sound and wifi broke when upgrading major releases, so:
- Fedora - it's what my university used in the CS labs, so I figured I'd try it out; release upgrades took forever (>1 hr), so:
- Arch - coworker at my internship recommended I try it out, so I gave it a shot and loved it; stayed there for about 5 years
- openSUSE Leap - FreeBSD didn't support Docker properly and I didn't trust Arch on a server, so I went looking; holy grail was something stable for servers and rolling for desktop, and Debian Testing (we used Debian stable at work) wasn't quite new enough and Sid scared me, so I tried out Leap on a VPS; Leap worked out well and I actually liked Yast, so I figured I'd try out Leap on my laptop; I liked it, but decided I wanted fresher packages, so:
- Tumbleweed - I upgraded to Tumbleweed and didn't have issues for over a year (broke less than Arch), so I converted my desktop Arch install to Tumbleweed, and I've been happy for >5 years now (longest I've been on any distro, I think)
I wanted the same system on my desktop and server, and I really like rolling releases on my desktop. openSUSE was pretty much the only one that actually offered both. They were ballsy enough to officially support btrfs in production, so I figured switching my NAS over to it wouldn't be a terrible idea, especially since I only needed a RAID mirror, so the write hole on RAID 5/6 wouldn't be an issue. The first time an update went south on my desktop (Nvidia, go figure), snapper rollback saved me a bunch of time, and that's what sold me on it. I since replaced my GPU w/ AMD and I haven't had a single issue w/ updates since, whereas on Arch I'd have 3-4 manual interventions/year unrelated to Nvidia.
And yeah, my kids haven't used my computers for anything other than Steam, YouTube, and some random web games. But they're technically on Linux and have successfully navigated both GNOME (used for a bit before KDE had proper Wayland support) and KDE, so they're more seasoned than some new Linux users.
Awesome answer. Thank you for taking the time. I've enjoyed getting to know this part of your story.
Yeah, any time! It sounds like we had a relatively similar entry into *nix. Have a fantastic day. :)
"That I don't we can map yet"
Seems like your brain failed to calculate a few things when trying to write that.
interesting. what kind of "calculations"? what do we know about it?
I recall seeing the brain of like an amoeba or something very small with only like 100 neurons or something being simulated.
You certainly didn’t see an amoeba brain. They are single cells. I wonder if you heard about the efforts to do the same thing with a nematode?
Openworm
Came here to say the same thing. It worked very well
I’m not sure I understand the distinction they’re trying to make between the connectome and the projectome.
The connectome is a map of individual neurones. The projectome is a map of how larger regions interconnect. Particularly the relative strength of the links.
It's the difference between a detailed road map vs the relative road capacity between countries. It cuts out a lot of fine detail to see larger patterns. Both are useful, but in different ways.
I'm actually stunned that a cell-level map, the connectome, is even possible. Are we saying that every fruit fly has all these individual brain cells in this very particular configuration? I always assumed that major brain organelles might be the same from individual to individual, but not down to the level of individual neurons. Am I reading something wrong, or are individuals really as similar as this?
Proviso: I have not read up on this particular experiment.
Fruit flies as used in labs are not like their wild cousins. They have been bred to be exceptionally consistent, since this makes X-Y experiments easier. If you take genetically identical eggs and raise them in effectively identical conditions, you get almost the same wiring.
There will still be areas of variability, but a lot will be conserved. This is likely an "average" wiring. Once you have even an approximate baseline, you can vary things and see how the wiring adapts.
The way I understand it:
Connectome: Displays the synapses between individual neurons
Projectome: The links between regions of the brain via neurons that synapse across regions (basically a subset of the connectome)
So if the connectome is a map of every road, highway, dirt path in the USA, the projectome is a map of the interstates between major cities.
Please someone tell me if this is way off base.
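Under that reading, a projectome can be derived from a connectome by aggregating synapses that cross region boundaries. A toy sketch of the idea, with made-up neuron and region names (not real fly data):

```python
from collections import Counter

# Toy connectome: (pre-synaptic, post-synaptic) neuron pairs,
# plus a map from each neuron to the brain region it sits in.
connectome = [("n1", "n2"), ("n1", "n3"), ("n2", "n4"),
              ("n3", "n4"), ("n4", "n1")]
region = {"n1": "A", "n2": "A", "n3": "B", "n4": "B"}

# Toy projectome: count synapses crossing each ordered pair of
# regions, ignoring connections that stay within one region.
projectome = Counter(
    (region[pre], region[post])
    for pre, post in connectome
    if region[pre] != region[post]
)
print(dict(projectome))  # e.g. two A->B links, one B->A link
```

So in the road-map analogy, the Counter keeps only the "interstate" traffic between regions and its relative strength, discarding the local streets.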
I believe it's this one.
https://flywire.ai
https://github.com/seung-lab/FlyConnectome
But I also found a second one.
https://www.fruitflybrain.org
https://github.com/fruitflybrain
Old news, but still very cool.
Ok. Now map out how, in an entire airport, they ALWAYS seem to manage to find me after I make lunch. Then, as I clap my hands to kill them, they use evasive techniques to avoid being killed. Then they come right back 2 seconds later. Until either I go insane and move, which they follow, or I finally get them with a clap and they're dead.
I heard that they keep coming back because your efforts to swat at them were clumsy and never a real threat