

Not only that, the Android Police article mentions they had a lot of trouble merging the internal and public branches, so I’m guessing they’ve diverged more and more as time went on.
There are apps that display user karma, though.
Does China also do this to people who visit the country? I’m wondering if the US is already worse than China or if they’re on the same level now.
It recycles people’s knowledge of email addresses in a nice way. It could even have some sort of autocomplete as you start typing the instance name, to prevent mistyping.
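As a rough illustration, here’s a minimal sketch of that autocomplete in Python, assuming a hypothetical hard-coded list of instance domains (a real client would pull and cache this from an instance directory instead):

```python
# Minimal sketch: suggest instance domains as the user types the part after "@".
# KNOWN_INSTANCES is a hypothetical hard-coded sample; a real client would
# fetch and cache this list from an instance directory.
KNOWN_INSTANCES = [
    "lemmy.world",
    "lemmy.ml",
    "mastodon.social",
    "beehaw.org",
    "sh.itjust.works",
]

def suggest_instances(typed: str, limit: int = 5) -> list[str]:
    """Return known instance domains starting with what the user has typed so far."""
    typed = typed.strip().lower()
    if not typed:
        return []
    return [domain for domain in KNOWN_INSTANCES if domain.startswith(typed)][:limit]

print(suggest_instances("lem"))  # ['lemmy.world', 'lemmy.ml']
```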
I remember listening to a podcast about scientific explanations. The guy hosting it is very knowledgeable about the subject, does his research, and talks to experts when the topic involves something he isn’t an expert in himself.
There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you’re doing, and you need a theory of how it works, before you can make new assumptions and test them). He gave the Apple Vision Pro as an example: even though the machine is new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven in other applications.
So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments that expand the knowledge we have. Sometimes those insights are completely random, often you need a whole career in the field, and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google’s transformer paper (“Attention Is All You Need”), which showed we could parallelize the training of language models, leading to the creation of “larger language models”. That was Google doing science. But you can’t control when the next breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practice we know of that actually accelerates science is collaboration between scientists around the world: publishing reproducible papers so that others can build on them and have insights you didn’t even think of, and so on.
Seems like the US and China will be good allies, judging by how things are going.
If I can offer one tip, it would be to find someone from a place you’re interested in and become a sort of pen pal with them in an app like Tandem; you’ll automatically get a better perspective on that place.
A lot of people will be happy to tell you about where they live, and you’d automatically have lots of people willing to talk to you just from being a native English speaker: as you probably know, there’s an endless list of people out there who know enough English to sort of communicate, but don’t get much chance to actually talk with a native speaker in a chat where they don’t have to be embarrassed about making a few mistakes.
If that’s all you need, I’d say most countries 😆
Salaries outside the US aren’t gonna be as high, but the cost of living is also much, much lower. As a Brazilian, I actually save more than 50% of my salary (while having a middle-class lifestyle), and I was able to buy an apartment without any debt at 30.
Also, if you go anywhere in Latin America, you’ll see the average diet is quite healthy.
This seems to me like just a semantic difference, though. People will say the LLM is “making shit up” when it outputs something that isn’t correct, and (as far as I know) that usually happens because the information you’re asking about wasn’t represented enough in the training data to consistently steer the answer toward it.
In any case, there is an expectation from users that LLMs can somehow be deterministic, when they’re not at all. They’re deep learning models so complicated that it’s impossible to predict what effect a small change in the input will have on the output. So a model could give the expected answer to a certain question and then give a very unexpected one just because you added or changed a word in the input, even if that word seems irrelevant.
Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows that there’s always some randomness to its answers, and sometimes it can output a totally weird, nonsensical answer too. Just start a new chat and ask again; it’ll give a different answer.
This is actually one way to tell whether it’s “hallucinating” something: if it gives the same answer consistently across many different chats, it’s likely not making it up (see the sketch below).
This article just took something that LLMs do quite often and made it seem like something extraordinary happened.
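To make that concrete, here’s a rough sketch of that consistency check in Python, assuming a hypothetical query_model() helper that starts a fresh chat on every call (it’s a placeholder, not any specific vendor’s API):

```python
# Rough sketch of a consistency check: ask the same question in several
# fresh chats and see whether one answer clearly dominates.
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical placeholder: wire this up to whatever LLM client you use,
    # making sure each call starts a brand-new chat (no shared context).
    raise NotImplementedError("plug in your own LLM client here")

def consistency_check(prompt: str, runs: int = 5, threshold: float = 0.8) -> tuple[str, bool]:
    """Return the most common answer and whether it appeared often enough
    to suggest the model is probably not making it up."""
    answers = [query_model(prompt).strip() for _ in range(runs)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / runs >= threshold

# Usage:
# answer, consistent = consistency_check("When was Lemmy first released?")
# If `consistent` is False, treat the answer as a likely hallucination.
```

Exact string matching is crude, of course; in practice you’d normalize the answers or compare only the key fact. But the idea is the same: an answer that survives many independent chats is less likely to be made up.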
I wish most YouTube videos had this quality instead of being the shallow “copy pastes” that 20 other channels are also doing.
Saved this one offline for the future.
Considering that Google took the chance to show users a scary warning on Google Play, I wouldn’t be surprised if Google used its influence to make this sound worse than it was.
I wonder if they say people should be careful with Chrome 😂
Still, what makes you think an alternative isn’t going to get flooded with AI too? That ship has already sunk.
And the same has applied to smartphones for a while now.
That’s a very good article.
Meanwhile, there was a video interviewing Brazilians living illegally in the US who supported Trump. Most of them were in favor of expelling immigrants if they broke laws (ignoring that they themselves are there illegally). They saw themselves as “good guys”, so it wouldn’t apply to them. And some of them complained that nowadays there were more immigrants and hence more competition for jobs, so they wanted it to be harder to get in. Typical “now that I’m up, pull up the ladder”. I just assume most of his supporters are selfish people who wouldn’t miss the chance to throw someone else under the bus for some personal gain, as long as they don’t stain their own hands with blood.
I would look for the video, but it was in Portuguese anyway.
Having more people does help, but only to a certain extent. At some point it just becomes difficult to moderate, and you end up with a higher number of casual users who don’t give a shit about the rules.
Making Mercosur more valuable is a good thing for South America itself, in the sense that it discourages countries from making radical moves. For example, Venezuela was suspended from it because of its human rights violations.
I mean, you can argue that if you ask the LLM something multiple times and it gives that answer the majority of the time, it has been trained to make that association.
But a lot of these “Wow! The AI wrote this” moments might just as well be something random it produced by chance.