
They did. So we installed a guardrail to prohibit it.

Now that’s called “burdensome regulation”, and we’re tearing all those guardrails down at an alarming pace.


How does this break free from Google? Isn't the Android that Google itself writes and maintains the upstream of GrapheneOS? Are they going to disconnect completely from upstream Android or something?

I wrote this essay to capture how I feel about the current state of the internet and share a potential solution.

The tl;dr is I'm saddened and scared in equal measure by the rise of bots, and I think we need to do something new in order to preserve any vestige of human-to-human digital communication.

My proposal is that we create invite-only networks where every account traces back through a chain of human trust. If a bot gets in, you prune the branch: remove it and every account it invited. The threat of losing your account (and your invitees losing theirs) creates real social accountability and makes moderation at scale practical.

There are kinks with the idea, of course, like the risk of false flags, witch hunts, and slow growth, but AI detection and CAPTCHAs are in a losing arms race with LLMs.
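To make the pruning mechanic concrete, here's a minimal sketch of the invite tree in Python. All the names (InviteTree, invite, prune) and the parent/children bookkeeping are my own illustration, not anything from the essay:

```python
# Minimal sketch of an invite tree with branch pruning (illustrative
# names; nothing here comes from the essay itself).

class InviteTree:
    def __init__(self, root: str):
        self.parent = {root: None}      # account -> inviter
        self.children = {root: set()}   # account -> accounts it invited

    def invite(self, inviter: str, invitee: str) -> None:
        """Register invitee as vouched for by inviter."""
        if inviter not in self.parent:
            raise KeyError(f"unknown inviter: {inviter}")
        self.parent[invitee] = inviter
        self.children[invitee] = set()
        self.children[inviter].add(invitee)

    def prune(self, account: str) -> set[str]:
        """Remove an account and everyone it transitively invited."""
        removed, stack = set(), [account]
        while stack:
            node = stack.pop()
            removed.add(node)
            stack.extend(self.children.pop(node, ()))
            inviter = self.parent.pop(node, None)
            if inviter in self.children:
                self.children[inviter].discard(node)
        return removed

tree = InviteTree("root")
tree.invite("root", "alice")
tree.invite("alice", "bot123")
tree.invite("bot123", "bot456")
tree.prune("alice")  # removes alice, bot123, and bot456
```

The transitive removal is the whole point: vouching for someone puts your own subtree on the line, which is where the social accountability comes from.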


Being able to write a paper does not make you any better than someone copying it.

I find using git for my notes annoying, but they also work great with Syncthing if you have an always-online peer. Before that I ran into conflicts that I guess would've been easier to resolve with git.

The study measures task completion on SWE-bench-style issues, but that misses the main reason most of us write these files. I don't use AGENTS.md to help the model solve GitHub issues faster. I use it to stop the agent from doing dumb things that waste my time later.

Things like: don't use TypeVar in new code, always run migrations through our wrapper, never modify the shared proto files without updating the generated code. These are guardrails, not performance optimizers. The study's framing around "task success rate" misses that the value is in reducing the cleanup work after the agent "succeeds."

The finding that context files encourage "broader exploration" actually supports this. I want the agent to check more files and run more tests, even if it costs 20% more tokens. Tokens are cheap. Debugging a subtle regression the agent introduced because it didn't know about an invariant in the codebase is not.
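For what it's worth, a guardrail-style AGENTS.md built around rules like those might look something like this; the wrapper script path is made up for the example:

```markdown
# AGENTS.md (illustrative example)

## Guardrails
- Do not use `TypeVar` in new code.
- Always run migrations through `scripts/migrate.sh` (hypothetical
  path), never by calling the migration tool directly.
- Never modify the shared `.proto` files without regenerating the
  derived code and committing both together.
```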


One thing that is a game changer on GrapheneOS is the network toggle for apps. Turn off network access for your keyboard, camera app, calculator, files, etc.

I had my cello played in on a machine when I bought it, since it was included at no charge. I didn't A/B it, though, so I don't know how much of a difference it made.

It's not that humans can't misread the question too, just that overall LLMs seem to have far less ability to correctly understand a prompt than the average human. And the "intelligence" shown in their understanding of the prompt seems to be far less than the "intelligence" in their answers.

So it feels like a big area of limitation or a big bottleneck towards getting a good answer.


If your review of AI-generated code is not comparable to writing it yourself, I have some real concerns about the quality of your reviews.

Been experimenting with OpenClaw for 2 weeks. Instead of writing about how it changed my workflows, I asked my agent to write it from its side. Surprisingly entertaining read.

Really powerful and interesting approach!

This take seems to require that models stop getting better at some capability level a little above where they are now. Is this a future event that you are very confident of?

I understand the sentiment, but the danger of these articles is that people give up when they should fight back.

This isn't the first industry mandated madness. Software engineers have supported any fad that their overlords dictated for a long time. It is always the mediocre 100 IQ people who act as mouthpieces for the industry and temporarily get ahead of their more intelligent colleagues.

It is no different now. You can see who is a paid shill and who is not. Python projects like NumPy, members of which are on the take from PyTorch, go to great lengths to rationalize AI usage. Anaconda people who take AI money join in.

Projects like Zig, which are much more interesting, move away from Slophub.

What concerns me is the silence of academics. There are enough tenured professors who could speak up, but they have been intimidated by various speech restrictions from different political parties over the last decade. Or they want that next industry grant.


That article speculates the OEM is Samsung, but I find that very hard to believe. Samsung is totally beholden to Google. They discontinued their own DeX and Tizen smartwatch OS for Google alternatives, and as for their "AI" features, most of them actually come from Google.

Google would not allow this and they're way too entangled with Samsung.


> When you build an app with Xcode, your .xcassets folders are compiled into binary .car files that ship with your application. . . . The “CAR” extension likely stands for “Compiled Asset Record” based on method names found in Xcode’s IBFoundation framework.

The article is well worth reading just to follow the author's reverse-engineering process, even without ingesting the specifics.


In my country we have a large religious population who eschew the smartphone. This means that no government, banking, or other services require a smartphone.

On the other hand, communism is collapse at everyone's cost.

If the reason the LLM retroactively invents for its previous mistakes is still useful for getting the LLM to not make that kind of mistake again, then the distinction you're driving at doesn't matter.

I also found it sloppy. The reference for heart transplant patients getting memories from the donor actually says that there are self-reported personality changes in 89% of heart transplant patients, which was statistically the same as for other organs. So it doesn’t say what the author wants it to say.

Yeah, it's really that simple. I have tried various applications as well and keep coming back to my custom script, because when a new voice model drops on HuggingFace it becomes possible to customize it immediately, rather than waiting for that application's developer to support the new model.
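For a sense of why the script approach adapts so fast: swapping in a brand-new model can be a one-line change. A minimal sketch, assuming the transformers text-to-speech pipeline and a placeholder model ID (some models need extra arguments like speaker embeddings):

```python
# Minimal sketch: point the pipeline at whatever model just dropped.
# "some-org/new-voice-model" is a placeholder, not a real repo.
import soundfile as sf
from transformers import pipeline

MODEL_ID = "some-org/new-voice-model"  # the one-line change per new model

tts = pipeline("text-to-speech", model=MODEL_ID)
out = tts("Hello from the brand-new model.")

# TTS pipelines return a dict with the waveform and its sample rate.
sf.write("hello.wav", out["audio"].squeeze(), out["sampling_rate"])
```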

Or literally a path of the self, which is what the OP was getting at, I think. And there is plenty of that in the software world: "I'm a Rubyist", "I'm a Pythonista", "a Rustacean" and so on. There is plenty of identity ridiculousness. I've been a C programmer, but I've also been a BASIC programmer, an assembly-language programmer, a PHP programmer, a FORTH programmer, and a whole list of others. To me that collapses to "I'm a programmer" (even if the sage advice on HN from the gurus is to never call yourself a programmer, I'm more than happy to do so). It defines what I do, not what I am, and it only defines a very small part of me. That's one reason why I can't stand the us-vs-them mentality that some programming languages seem to instill in their practitioners.

It's also more than a little misleading to compare to the 2022 peak. Anybody who was hiring software engineers in 2020-2022 or being hired as one knows that was a wild and unsustainable period.

I think modern day instruments are competitive and only getting better (I bought a modern cello by an excellent maker and couldn't be happier).

One large difference today is instrument makers are starting to share their techniques with each other, whereas Stradivarius took that information to the grave.

With a Strad you're getting a high-end instrument but also paying through the nose for rarity and legend.


> I also remember the brief moment of disbelief when I noticed AI could really code better than me

If AI can program better than you, then you’re not a very good programmer, so of course you’re going to have a different opinion to the author.


Much like the epidemic we're going through of "everything is fake", every comment that even has a whiff of these tells will automatically be dismissed as "AI".

I've found myself doing it a time or two.


That we embrace it generally. Even just proposing a naming convention would allow agents to find the AI-sanctioned branch (or create it) and have at it; see the sketch below.

(Maybe some AI agents can collaborate on "AILinux" and we can see how it measures up, ha ha.)
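As a sketch of what the convention could enable (the ai/ prefix is just one possibility, nothing agreed upon), an agent could find or create its sanctioned branch like this:

```python
# Sketch: find or create a branch under a hypothetical "ai/" namespace
# so agent work always lands on a clearly labeled branch.
import subprocess

def ai_branch(task: str) -> str:
    name = f"ai/{task}"
    existing = subprocess.run(
        ["git", "branch", "--list", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if not existing:
        subprocess.run(["git", "branch", name], check=True)
    subprocess.run(["git", "switch", name], check=True)
    return name

ai_branch("refactor-logging")  # now on ai/refactor-logging
```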


No, but you can switch and get comfortable with fully cloud-based solutions when your computer ages and the prices for a new one are through the roof.

The DSA European digital wallet spec currently requires Google or Apple attestation, so not for much longer.

And that is mandated by the EU.


There are huge advantages to co-location within the same time zone (plus or minus 2 hours). India is practically half a day away from the US, meaning you only get overlap for an hour or two at the end of their working day.
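To make the arithmetic concrete, a quick back-of-the-envelope check (my own illustration: 9-to-5 days, IST = UTC+5:30, US Eastern = UTC-5, DST and cross-midnight overlap ignored):

```python
# Rough overlap, in hours, between two offices' working days.
def overlap(offset_a: float, offset_b: float,
            hours_a=(9, 17), hours_b=(9, 17)) -> float:
    # Express both working days in UTC, then intersect the intervals.
    a = (hours_a[0] - offset_a, hours_a[1] - offset_a)
    b = (hours_b[0] - offset_b, hours_b[1] - offset_b)
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

print(overlap(5.5, -5.0))                    # IST vs Eastern: 0.0 hours
print(overlap(5.5, -5.0, hours_a=(11, 21)))  # IST stays late: 1.5 hours
```

With strict 9-to-5 days the overlap is literally zero; the hour or two only appears when the Indian team stretches its day toward 9 pm.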
