carlsborg an hour ago
Good lens.
The crux of the auto research repo is basically one file, program.md, a system prompt that can be summarized as “do this in a loop: improve train.py, run the training, run the evals, record the result. Favor simplicity.” The other files are an arbitrary ML model that is being trained.
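The loop described above can be sketched roughly like this (a minimal sketch only; the eval script name, the JSONL results log, and the helper functions are my assumptions, not the repo's actual interface — the agent's edit step is elided entirely):

```python
import json
import subprocess
from datetime import datetime, timezone

def record_result(eval_output, results_log="results.jsonl"):
    """Append one iteration's eval result to a JSONL log (hypothetical format)."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "eval": eval_output.strip(),
    }
    with open(results_log, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def autoresearch_loop(n_iters=3):
    for _ in range(n_iters):
        # 1. Agent edits train.py (elided -- an LLM call in practice).
        # 2. Run the training script.
        subprocess.run(["python", "train.py"])
        # 3. Run the evals and capture their output.
        eval_out = subprocess.run(["python", "eval.py"],
                                  capture_output=True, text=True).stdout
        # 4. Record the result so the next iteration can see it.
        record_result(eval_out)
```

The append-only log is what makes the "record result" step matter: each iteration's prompt can include the history, so the agent sees which edits helped.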
_pdp_ 40 minutes ago
This has been the standard approach for more complex LLM deployments for a while now in our shop.
Using different models across iterations is also something I've found useful in my own experiments. It's like getting a fresh pair of eyes.
datsci_est_2015 an hour ago
I can’t imagine letting an agent try everything the LLM chatbot had recommended ($$$). Its recommendations often include poorly maintained or niche libraries that have a lot of content written about them but, I can only imagine, very limited use in real production environments.
On the other hand, we have domain expert “consultants” in our leadership’s ears making equally absurd recommendations that we constantly have to disprove. Maybe an agent can keep those consultants occupied and let us do our work in peace.
jpcompartir an hour ago
The bottleneck in AI/ML/DL is always data (volume & quality) or compute.
Does/can Autoresearch help improve large-scale datasets? Is it more compute efficient than humans?
1970-01-01 16 minutes ago
That's such a weird switch. There's lots of free medical imaging online. Example: https://www.cancerimagingarchive.net/
dvt an hour ago
[1] https://github.com/ykumards/eCLIP/commits/main/autoresearch
lamroger an hour ago
I started looking at Kaggle again, and autoresearch seems to converge on many of the same solution vibes you see there.
Wild ensembles, squeezing out a bit more loss. More engineering than research IMO
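The ensembling move mentioned above, in its simplest Kaggle-style form, is just a weighted average of several models' predictions (a minimal sketch with NumPy; the models and weights are hypothetical, not from the repo):

```python
import numpy as np

def ensemble_average(predictions, weights=None):
    """Weighted average of per-model prediction arrays, each shape (n_samples,)."""
    preds = np.stack(predictions)            # (n_models, n_samples)
    if weights is None:
        # Uniform weights by default.
        weights = np.ones(len(predictions)) / len(predictions)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # normalize so weights sum to 1
    return weights @ preds                   # (n_samples,)
```

In practice the weights are often tuned on a validation set, which is exactly the kind of loss-squeezing engineering being described.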