
I've found that to be accurate when asking it questions that require ~PhD-level knowledge to answer. e.g. Gemini and ChatGPT both seem capable of answering the questions I have as I work through a set of notes on algebraic geometry.

Its performance on riddles has always seemed mostly irrelevant to me. Want to know if models can program? Ask them to program, and give them access to a compiler (they can now).

Want to know if it can handle PhD-level questions? Ask it the questions a PhD (or at least a grad student) would ask.

They also reflect the tone and knowledge of the user and question. Ask it about your cat's astrological sign and you get emojis and short sentences in list form. Ask it why large atoms are unstable and you get paragraphs with a larger vocabulary. Use jargon and it becomes more of an expert, etc.




I don't know about algebraic geometry, but AI is absolutely terrible at communications and social sciences. I know because I can tell when my postgraduate students use it.

Are you sure? What about when you use it? e.g. suppose you ask it to critique experimental design and analytical methodology, identify potential confounders and future areas to explore, or help summarize related research, etc.

If you can tell when your students use it, presumably you mean they're just copying whatever it outputs, which sounds like the student doesn't know what they're doing or is being lazy. That doesn't mean the model isn't capable; it means an incapable person won't know what they'd want to ask of it.

Additionally, even for similar prompts, my experience is that the models for professional use (e.g. gpt-codex) take on a much more professional tone and level of pragmatism (e.g. no sycophancy) than models for general consumer entertainment use (e.g. chatgpt).


> What about when you use it?

I use AI for coding, but not for anything involving writing text; it's just horrendous at it. It spews verbose slop, devoid of meaning, original thought, or nuanced critique.

> That doesn't mean the model isn't capable; it means an incapable person won't know what they'd want to ask of it.

So it's user error again then, eh? PhD experts are able to help even "incapable" students; that's often a big part of their job.


Weird, my experience is that they are full of nuance. e.g. here is a snippet of my discussion with Gemini:

> Would you like to see why Q is "flexible" (Flat) while Z/2 is "rigid" (Not Flat) using this "crushing" vs. "preserving" logic? It explains why localized rings are almost always better to work with.

> Roughly, Q is flexible because it's just an epic extension of the initial object Z?

> That is a very "categorical" way to put it, but it’s actually a bit more subtle! If being an "epic extension of the initial object" was enough to be flat, then every quotient would be flat too. To refine your intuition: Q is "flexible" (flat) not just because it's an extension, but because of how it extends Z.

> Z/2 is a Quotient: It adds a constraint (2=0). Constraints are "rigid." As we saw, if you multiply by 2, everything collapses to zero. That's a "hidden kernel," which breaks left exactness.

> Q is a Localization: It adds an opportunity (the ability to divide by any n≠0). This is the definition of "flexibility."
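
(For anyone who wants the computation behind that "hidden kernel" remark spelled out, here's my own rough sketch in LaTeX; it's the standard flatness check, not something taken from the chat itself:)

  \documentclass{article}
  \usepackage{amsmath,amssymb}
  \begin{document}
  % Start from the short exact sequence presenting Z/2 as a quotient of Z:
  \[ 0 \to \mathbb{Z} \xrightarrow{\;\cdot 2\;} \mathbb{Z} \to \mathbb{Z}/2 \to 0 \]
  % Applying $-\otimes_{\mathbb{Z}} \mathbb{Z}/2$ turns "multiply by 2" into the zero map,
  % so injectivity (left exactness) is lost and $\mathbb{Z}/2$ is not flat:
  \[ \mathbb{Z}/2 \xrightarrow{\;0\;} \mathbb{Z}/2 \to \mathbb{Z}/2 \to 0 \]
  % Applying $-\otimes_{\mathbb{Z}} \mathbb{Q}$ instead keeps the sequence exact,
  % since $\cdot 2$ is invertible on $\mathbb{Q}$ and $\mathbb{Q}\otimes_{\mathbb{Z}}\mathbb{Z}/2 = 0$;
  % localizations like $\mathbb{Q}$ are flat:
  \[ 0 \to \mathbb{Q} \xrightarrow{\;\cdot 2\;} \mathbb{Q} \to 0 \to 0 \]
  \end{document}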

It's hard for me to imagine what kind of work you have where it's not able to capture the requisite nuance. Again, I also find that when you use jargon, the models adapt on their own and raise the level of the conversation. They also no longer seem to have an issue with saying "yep exactly!" or "ehh not quite" (and providing counterarguments) as necessary.

Obviously if someone just says "write my paper" or whatever and gives that to you, that won't work well. I'd think they wouldn't make it very far in their academic career regardless (it's surprising that they could get into grad school); they certainly wouldn't last long in any software org I've been in.


> It's hard for me to imagine what kind of work you have where it's not able to capture the requisite nuance

I teach journalism. My students have to write papers about journalism, as well as do actual journalism. AI is very poor at the former, and outright incapable of doing the latter. I challenge you to find a single piece of original journalism written by AI that doesn't suck.

> Obviously if someone just says "write my paper" or whatever and gives that to you, that won't work well

But it would work extremely well if they told a PhD to do it.



