Grimy

joined 1 year ago
[–] Grimy@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (4 children)

I'm very critical of American imperialism but I fail to see how the US is using Ukraine to hurt Russia.

The fault always lies with the invader; Russia did this to itself. If I see someone getting stabbed and throw him a knife, saying I'm using him to hurt his attacker is silly. Russia can leave anytime.

I do agree tankie is thrown around far too much. I've been called one myself just for talking shit about the military, even though I never mentioned another country or a political ideology.

The spread of the word as well as the constant villainization of China seems like prep for red scare 2.0, so we can have the population support bombing villages full of civilians (again).

[–] Grimy@lemmy.world -1 points 3 days ago* (last edited 3 days ago)

Ahhhhh, I get it, sorry I misunderstood. It's very rare I get comments actually agreeing with me on the subject.

[–] Grimy@lemmy.world -5 points 3 days ago* (last edited 3 days ago) (7 children)

"Daddy Gaben can do the bad thing because he was first to do it, heehee"

The fact is we need new regulations and laws but the government will never act if bootlickers like you are the majority.

Stop defending billionaires that are actively robbing you.

[–] Grimy@lemmy.world -5 points 3 days ago (3 children)

Would you ever say the same thing if I was being critical of Bezos or Musk?

The fact is Steam is the only company that benefits from an army of simps ready to defend Gaben at the slightest hint of negativity.

This article is literally a puff piece.

Are you sure I'm the one part of the hive mind, with my 36 downvotes? Your comment is very ironic.

[–] Grimy@lemmy.world 6 points 1 week ago* (last edited 1 week ago)

I also think location has to do with it. The dev team behind Coromon are in Europe while both Nintendo and the Palworld devs are in Japan.

From what I understand from a previous article, Japanese patent laws can be quite strict.

 

On Friday, TriStar Pictures released Here, a $50 million Robert Zemeckis-directed film that used real-time generative AI face transformation techniques to portray actors Tom Hanks and Robin Wright across a 60-year span, marking one of Hollywood's first full-length features built around AI-powered visual effects.

Metaphysic developed the facial modification system by training custom machine-learning models on frames of Hanks' and Wright's previous films. This included a large dataset of facial movements, skin textures, and appearances under varied lighting conditions and camera angles. The resulting models can generate instant face transformations without the months of manual post-production work traditional CGI requires.

"You couldn't have made this movie three years ago," Zemeckis told The New York Times in a detailed feature about the film. Traditional visual effects for this level of face modification would reportedly require hundreds of artists and a substantially larger budget, closer to standard Marvel movie costs.

Meanwhile, as we saw with the SAG-AFTRA union strike last year, Hollywood studios and unions continue to hotly debate AI's role in filmmaking. While the Screen Actors Guild and Writers Guild secured some AI limitations in recent contracts, many industry veterans see the technology as inevitable. "Everyone's nervous," Susan Sprung, CEO of the Producers Guild of America, told The New York Times. "And yet no one's quite sure what to be nervous about."

[–] Grimy@lemmy.world 8 points 2 weeks ago

Lemmy lets them respond to you even when blocked. Kind of funny to block someone for harassment and still see a removed comment pop up behind one of my comments a few hours later.

I find it fair in a way, I just wish it hid it from me completely since curiosity usually gets the better of me. I've only had to block one person this whole time anyway, so it's really not the end of the world either.

[–] Grimy@lemmy.world 11 points 2 weeks ago

Why use a search engine at all when you can have your browser directly text your mom?

[–] Grimy@lemmy.world 3 points 3 weeks ago

It's 400 hours of audio, the transcripts ended up being 5 million words, and only snippets of it are useful.

[–] Grimy@lemmy.world 13 points 3 weeks ago (3 children)

These important limitations highlight why it's still important to have humans involved in the analysis process here. The NYT notes that, after querying its LLMs to help identify "topics of interest" and "recurring themes," its reporters "then manually reviewed each passage and used our own judgment to determine the meaning and relevance of each clip... Every quote and video clip from the meetings in this article was checked against the original recording to ensure it was accurate, correctly represented the speaker’s meaning and fairly represented the context in which it was said."

It's literally the paragraph right after.

They verify it.

[–] Grimy@lemmy.world 0 points 3 weeks ago

I was actually thinking of setting up something similar for the mountain of ufo related docs they keep dropping every few months. They tend to use obscure words and even slip in typos so just searching through them doesn't work very well.
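The typo-tolerant search that comment is asking for can be sketched with nothing but the Python standard library. This is purely a hypothetical illustration (the `fuzzy_find` helper, the cutoff value, and the sample text are all made up, not anything from the docs in question): fuzzy matching catches words that a plain text search would miss because of typos or odd spellings.

```python
import difflib

def fuzzy_find(term, text, cutoff=0.8):
    """Return up to 5 words in `text` that approximately match `term`."""
    words = text.split()
    # get_close_matches scores each word against the term with a
    # similarity ratio and keeps those at or above `cutoff`,
    # sorted best match first
    return difflib.get_close_matches(term, words, n=5, cutoff=cutoff)

# A deliberate typo ("phenomenom") still gets found alongside the
# correctly spelled word
doc = "The observed phenomenom moved erratically before the phenomenon vanished"
print(fuzzy_find("phenomenon", doc))
```

For a big document dump you'd likely want something heavier (embeddings or a proper fuzzy index), but the same idea scales: score candidates by similarity instead of demanding exact matches.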

 

Beautiful piece imo. There's a higher res version on their site.

75
submitted 4 months ago* (last edited 4 months ago) by Grimy@lemmy.world to c/technology@lemmy.world
 

Meta's issue isn't with the still-being-finalized AI Act, but rather with how it can train models using data from European customers while complying with GDPR — the EU's existing data protection law.

  • Meta announced in May that it planned to use publicly available posts from Facebook and Instagram users to train future models. Meta said it sent more than 2 billion notifications to users in the EU, offering a means for opting out, with training set to begin in June.

  • Meta says it briefed EU regulators months in advance of that public announcement and received only minimal feedback, which it says it addressed.

  • In June — after announcing its plans publicly — Meta was ordered to pause the training on EU data. A couple weeks later it received dozens of questions from data privacy regulators from across the region.

 

A bipartisan group of senators introduced a new bill to make it easier to authenticate and detect artificial intelligence-generated content and protect journalists and artists from having their work gobbled up by AI models without their permission.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would direct the National Institute of Standards and Technology (NIST) to create standards and guidelines that help prove the origin of content and detect synthetic content, like through watermarking. It also directs the agency to create security measures to prevent tampering and requires AI tools for creative or journalistic content to let users attach information about their origin and prohibit that information from being removed. Under the bill, such content also could not be used to train AI models.

Content owners, including broadcasters, artists, and newspapers, could sue companies they believe used their materials without permission or tampered with authentication markers. State attorneys general and the Federal Trade Commission could also enforce the bill, which its backers say prohibits anyone from “removing, disabling, or tampering with content provenance information” outside of an exception for some security research purposes.

(A copy of the bill is in the article, here is the important part imo:

Prohibits the use of “covered content” (digital representations of copyrighted works) with content provenance to either train an AI-/algorithm-based system or create synthetic content without the express, informed consent and adherence to the terms of use of such content, including compensation)
