Data extraction is a use case that fine-tuned models are fantastic at, so I'm not surprised that OP got good results. That said, I've also found it's pretty easy to beat GPT-4 across many task types if you have a way of getting strong training data. We published some research[1] a week ago where we found that across four example tasks (creative summarization, question answering, data extraction, and classification), a fine-tuned Llama 3 8B was able to outperform GPT-4 on three of them. The key was to create a repeatable way of generating high-quality training data, which is also addressed in the post.
My use case would be fine-tuning on technical docs: specific news, two years of blog posts, primary source material, and Twitter explainer threads. I want to gather all the niche information on a topic from the last two years, dump it in, and have an LLM that is a subject-matter expert.
Your use case is better suited to RAG. This is where you retrieve data from a large dataset and inject it into the user's request so the AI model has the context it needs to answer accurately.
But that's not a silver bullet: you'll need to spend significant time on chunking strategy and ranking of results to get decent response accuracy.
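To make the retrieve-and-inject pattern concrete, here's a minimal, library-free sketch. The keyword-overlap scorer is a toy stand-in (real RAG systems use embedding similarity and a vector store), and all the names and documents are made up for illustration:

```python
import re

# Minimal sketch of the RAG pattern: retrieve relevant chunks, then inject
# them into the prompt. A toy word-overlap score stands in for the
# embedding-based retrieval a real system would use.

def words(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the query."""
    return sorted(chunks, key=lambda c: len(words(query) & words(c)), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Inject retrieved context into the user's request."""
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The v2 API deprecates the /login endpoint in favor of /auth/token.",
    "Our blog moved to a new domain in 2023.",
    "Rate limits are 100 requests per minute per API key.",
]
print(build_prompt("what replaced the login endpoint?", docs))
```

The chunking and ranking work mentioned above lives in `retrieve`: in practice that's where you'd swap in embeddings, rerankers, and a smarter chunker.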
See LangGraph's "conditional edges" concept here: https://langchain-ai.github.io/langgraph/concepts/low_level/...
You can see how that "routing function" could include a call to a "Router LLM." And yes, fine-tuning is a great way to improve the routing intelligence of said Router LLM.
Great question btw!
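The conditional-edge idea above boils down to a function that inspects state and returns the name of the next node. Here's a library-free sketch (all node names and the keyword heuristic are hypothetical); in LangGraph this routing function is what you'd wire in, and a fine-tuned Router LLM could replace the heuristic:

```python
# Library-free sketch of a "conditional edge": a routing function looks at
# the state and returns the name of the next node to run. A fine-tuned
# "Router LLM" call could replace the keyword heuristic below.

def route(state: dict) -> str:
    """Hypothetical router; a real one might call a small fine-tuned model."""
    question = state["question"].lower()
    if "extract" in question or "parse" in question:
        return "extraction_node"
    if "summarize" in question:
        return "summary_node"
    return "general_node"

nodes = {
    "extraction_node": lambda s: {**s, "answer": "ran extraction"},
    "summary_node":    lambda s: {**s, "answer": "ran summarization"},
    "general_node":    lambda s: {**s, "answer": "ran general QA"},
}

state = {"question": "Extract all dates from this press release"}
state = nodes[route(state)](state)
print(state["answer"])  # → ran extraction
```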
I suppose the output could be laundered by publishing it on the web and having another entity crawl it.
OpenAI doesn't treat anyone else's content any differently, acting like it's all fair game, so why should we care?
Personally, my PhD involved fine-grained ACE-like event and sentiment extraction, and "small" specialized fine-tuned transformers like BERT and RoBERTa-large outperformed prompted LLMs. Would love to see small-model scores included alongside some SOTA pipelines.
This is great work anyway even if it replicates known results!
https://www.threads.net/@ethan_mollick/post/C46AfItO8RS?hl=e...
For many extractive tasks BloombergGPT was quite disappointing. A 5-10% performance hit with much larger inference cost compared to smaller models is not desirable.
But the research investment for Bloomberg makes sense to take the risk: a do-it-all generative model can mean significant complexity reduction in maintenance and deployment overhead.
It didn't directly pay off for many extractive tasks, but I bet they're iterating. Bloomberg has the data moat and the business needs in their core products to make it worthwhile.
LLM approaches were evaluated on my own time but never published (I left research after obtaining my PhD).
Excluding the part in the middle because I don't want to repeat anything that could cause issues for you. I just wanted to comment that that is terrible. People often talk about the siloed nature of research in industry without considering that academia supports its own draconian publishing system. I understand IP protection, but IP protection doesn't have to mean no access. This is a huge issue in the bio world (biostats, genetics, etc.).
I think this is default policy for thesis based on publication agreements here.
In any case, I am not too worried.
I have skimmed through it, and it's truly impressive how good annotation of a dataset can lead to such strong results.
I apologise in advance if the question seems ignorant: the blog post talked about fine-tuning models online. Given that BERT models can run comfortably on even iPhone hardware, were you able to fine-tune your models locally, or did you have to do it online too? If the latter, are there any products that you recommend?
I doubt you can fine-tune BERT-large on a phone. A quantized, inference-optimised pipeline can be leaps and bounds more efficient, and isn't comparable to the full-model Hugging Face training pipelines I used at the time. For anything beyond adapter-based training you're ideally going to need GPUs.
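Some back-of-envelope arithmetic makes the gap concrete. The numbers below are approximations (BERT-large is roughly 340M parameters; fp32 Adam training needs weights, gradients, and two optimizer moments, about 16 bytes per parameter before activations, while int8 inference needs about 1 byte per parameter):

```python
# Rough memory estimate for why full fine-tuning of BERT-large wants a GPU
# while quantized inference can fit on a phone. Assumes fp32 training with
# Adam: weights (4 B) + gradients (4 B) + two moments (8 B) = 16 B/param,
# not counting activations. Numbers are ballpark, not measured.

params = 340e6  # BERT-large, approximately

train_bytes = params * (4 + 4 + 8)  # fp32 weights + grads + Adam moments
infer_bytes = params * 1            # int8-quantized weights

print(f"full fine-tuning: ~{train_bytes / 1e9:.1f} GB (plus activations)")
print(f"int8 inference:   ~{infer_bytes / 1e9:.2f} GB")
```

So even before activations, full training wants several GB of fast memory, while quantized inference fits in well under 1 GB, which is the asymmetry behind "runs on an iPhone" vs "needs a GPU to train."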
I am pretty sure that a fine-tuned smaller model would be better and faster for this task. It would be great to start fine-tuning and sharing such smaller models: they don't have to beat the commercial hosted LLMs, as long as they are at least not worse. They are already much faster and cheaper, which is a big advantage for this purpose. There is also a real need for these tasks to run offline when one cannot share the data with OpenAI and the like. Higher speed and lower cost also allow for more experimentation with more specific fine-tuning and prompts, with less worry about prompt token lengths and cost. This is an application where smaller, locally run, fine-tunable models can shine.
I fully agree. I realized this early on when experimenting with GPT-3 for web data extraction. After posting the first prototype on Reddit and HN, we started seeing a lot of demand for automating rule-based web scraping stacks (lots of maintenance, hard to scale). This eventually led to the creation of our startup (https://kadoa.com) focused on automating this "boring and hard" problem.
It often comes down to relatively unexciting use cases like this where AI adds the most value.
AI won't eliminate our jobs, but it will automate tedious, repetitive work such as web scraping, form filling, and data entry.
They didn't make the (incorrect) statement that no other serious, useful application exists.
But that's how it reads when you cut the quote off before "I have actually engaged in for real work and found useful".
Still, it's good to see someone walk through their fine-tuning process, with a mix of hosted and local options.
They take a much simpler model, fine-tune it, and manage to beat a far more advanced model.
In that sense it's not that surprising that, on a pure text-extraction task with little "thinking" required, a 7B model does well and outperforms other models after fine-tuning. On the "noshotsfired" label, GPT-4 is even accused of overthinking it.
It is interesting how fine-tuned Mistral 7B and Llama 3 8B outperform fine-tuned GPT-3.5-turbo. I would tend to attribute that to those models being newer and "more advanced" despite their low parameter count, but maybe that's reading too much into a small score difference.
A fine-tuned 500B-parameter model would probably beat the fine-tuned 7B model, but only by a bit (depending on the task, obviously). A lot of that capacity is used for knowledge and isn't needed for extraction/classification tasks; fine-tuning isn't touching most of those weights. The smaller models get to focus on general language skills rather than on answering "describe the evolution of France's economy in the 1800s".
I would say this actually invalidates the whole thing.
2. It would be nice to try again with temperature 0. I do a lot of structured data extraction, and in my experience temperature 0 should always be used there; it can make a huge difference. A temperature of 1 means the model samples from its full output distribution, so it will sometimes pick lower-probability (and more often wrong) tokens...
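For anyone unsure what temperature actually does: logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution toward the top token (APIs typically special-case temperature 0 as greedy argmax, since dividing by zero is undefined). A small illustration with made-up logits:

```python
import math

def softmax_with_temperature(logits, t):
    """Divide logits by temperature t before softmax; small t sharpens."""
    scaled = [x / t for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up token logits

for t in (1.0, 0.5, 0.01):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])

# As t -> 0 the distribution collapses onto the highest-logit token,
# which is why temperature 0 gives deterministic, greedy output.
```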
A content moderation response on 4% of articles. This is just financial news text.
It is a prime reason we are considering open models.
I am not sure I disagree. If there is a pro-ChatGPT user, I'm probably it.
I've often seen it give significantly less effort to answering the question.
Seems to me then, priority one should be "free and open source all the models as hard as possible, so that EVERYONE can fine-tune."
(This being a subset of the idea of, free / open source is generally preferable for both freedom and quality)
I despise the centralization of this tech as well, and while it's encouraging that smaller fine-tuned models can be better, they won't win (or will barely stand a chance) on the virtue of openness and privacy alone. The best we can hope for is proliferation in the small-to-medium-sized business service space: that OpenAI tokens won't be worth the extra expense once open models are commoditized and effective. This was probably Zuck's plan all along: to prevent centralized gatekeepers in tech that mainly benefits his rivals. But the enemy of my enemy is my friend, so his actions may be the best he's ever done for the public good.
I think your first one is getting downvoted hard because your first sentence is not at all how any of this works.
Sucking down personal data isn't just a bad idea for privacy; it's also bad for "making the best products." I think you're overstating the extent to which all that data that is stolen and sold to the highest bidder actually helps the company buying it.
> data that is stolen and sold to the highest bidder
I didn't necessarily mean the data brokers (although that's an interesting angle). But say Apple now has a bunch of info about your calendar, email, and contacts; then they clearly have an upper hand in providing better products than an anonymous API call can. Not all products need personalization, but LLMs? I can think of tons of use cases.
85% of the time they beat GPT-4.
You can see the results here: https://predibase.com/fine-tuning-index.
The site has a series of interactive charts and a link to our Arxiv paper.
Why is this one labelled with start_date: 2011-02-07?
> Afghan, Coalition Forces Clear Northern Kandahar ISAF Joint Command - Afghanistan 2011-02-D-081 For Immediate Release KABUL, Afghanistan (Feb. 12) – Afghan and coalition forces set out to provide security and assist the local population during a clearing operation in a remote village in Shah Wali Kot district, Kandahar province, Feb. 8. District Chief of Police Bacha Khan, and his policemen; Afghan commandos from 2nd Company, 3rd Commando Kandak, along with U.S. service members from Special Operations Task Force – South, searched the village throughout the day and detained 20 suspected insurgents. Also found were 80 pounds (36 kilograms) of homemade explosives and various improvised explosive device-making materials. Leading a squad during the operation was Afghan commando Sgt. Hafiz Rahman, who said this operation has shown him progress. “The people are respecting us,” Rahman said. “They ask us if we want tea, or ‘do we want bread?’ They are thankful for the security.” Children during the operation brought commandos blankets in the evening and offered them food throughout the day.
Trying to find the source, I'm also not seeing any indication of Feb 7.
https://www.dvidshub.net/news/65238/afghan-police-commandos-...
---------------
And why is this one labelled Mar 6? Both GPT-4o and I find Mar 7 to be the logical answer.
ISAF Joint Command Morning Operational Update, March 8, 2011 ISAF Joint Command - Afghanistan 2011-03-S-022 For Immediate Release KABUL, Afghanistan (March 8, 2011) Afghan and coalition forces targeted a Taliban district chief, killed one insurgent and detained several others during an operation in Burkah district, Baghlan province, yesterday. The Taliban district chief maintains ties to Taliban senior leadership throughout Kunduz, Baghlan, and Takhar provinces. He is involved in purchasing weapons and IEDs. Intelligence reports led the security force to the targeted compound in the city, where Afghan forces called for all occupants to exit the buildings peacefully before conducting a search. During that time, an armed individual threatened the security force and the force returned fire, killing him. Several suspected insurgents were detained after initial questioning at the scene.
But despite that, the "finetuned" model also outputs Mar 6. How does it arrive at Mar 6?
The hype is really getting tiresome. There is no way to get from here to any intelligent system with the current techniques. New breakthroughs will require insights into discrete spaces which are not amenable to curve fitting with gradient descent.
The Claude models all have a 200,000 token limit and respond _really_ well to examples - you can feed them in as chat JSON message pairs of user input / ideal assistant output.
Haiku is dirt cheap for this kind of thing and with 200,000 tokens you can probably provide a dozen or so examples.
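The user/assistant example-pair idea looks roughly like this. The field names follow the common chat-JSON shape and the example texts are shortened paraphrases of the press releases discussed in this thread; check the Anthropic docs for the exact request format:

```python
# Sketch of few-shot prompting via chat message pairs: each example is a
# user turn (the input) followed by an assistant turn (the ideal output).
# Field names follow the common chat-JSON shape; the exact request format
# depends on the API version.

examples = [
    ("KABUL, Afghanistan (Feb. 12) ... clearing operation ... Feb. 8.",
     '{"start_date": "2011-02-08"}'),
    ("KABUL, Afghanistan (March 8, 2011) ... during an operation ... yesterday.",
     '{"start_date": "2011-03-07"}'),
]

def build_messages(examples, new_input):
    """Interleave example pairs as user/assistant turns, then the real query."""
    messages = []
    for user_text, ideal_output in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": ideal_output})
    messages.append({"role": "user", "content": new_input})
    return messages

msgs = build_messages(examples, "KABUL, Afghanistan (Jan. 25, 2013) ...")
print(len(msgs))  # → 5
```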
https://huggingface.co/datasets/strickvl/isafpressreleases_t...
but when looking at rows where GPT-4o was deemed inaccurate, it seems to me the label was wrong, or at least that label couldn't be inferred from the input text. Yet the finetuned model was able to predict it.
Which makes me wonder whether the finetuned models are poisoned with eval data...
See this one:
> ISAF Joint Command Morning Operational Update, March 8, 2011 ISAF Joint Command - Afghanistan 2011-03-S-022 For Immediate Release KABUL, Afghanistan (March 8, 2011) Afghan and coalition forces targeted a Taliban district chief, killed one insurgent and detained several others during an operation in Burkah district, Baghlan province, yesterday. The Taliban district chief maintains ties to Taliban senior leadership throughout Kunduz, Baghlan, and Takhar provinces. He is involved in purchasing weapons and IEDs. Intelligence reports led the security force to the targeted compound in the city, where Afghan forces called for all occupants to exit the buildings peacefully before conducting a search. During that time, an armed individual threatened the security force and the force returned fire, killing him. Several suspected insurgents were detained after initial questioning at the scene.
It says "yesterday" in a release dated March 8, so you would assume March 7 is the correct start_date, but it's labelled Mar 6, and the finetuned models get it "right" while GPT says Mar 7.
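The relative-date reasoning at issue is trivial to check mechanically, which is what makes the Mar 6 label so suspicious:

```python
from datetime import date, timedelta

# Sanity check of the disputed date: a release dated March 8, 2011
# describing an operation "yesterday" points to March 7, not March 6.
release_date = date(2011, 3, 8)
operation_date = release_date - timedelta(days=1)
print(operation_date.isoformat())  # → 2011-03-07
```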
---
Test 1: KABUL, Afghanistan (Jan. 25, 2013) During a security operation in Andar district, Ghazni province, yesterday, an Afghan and coalition force killed the Taliban leader, Alaudin. Alaudin oversaw a group of insurgents responsible for conducting remote-controlled improvised explosive device and small-arms fire attacks against Afghan and coalition forces. Prior to his death, Alaudin was planning attacks against Afghan National Police in Ghazni province.
Train: KABUL, Afghanistan (Jan. 8, 2013) – During a security operation in Washer district, Helmand province, yesterday, an Afghan and coalition force killed the Taliban leader, Mohammad Sayed, and one other insurgent. Mohammad Sayed distributed weapons and ammunition to Taliban fighters. Prior to his death, Sayed was attempting to acquire rockets for attacks targeting Afghan government officials in the province.
---
Test 2: For Immediate Release
KABUL, Afghanistan (Aug. 6, 2012) Afghan and coalition forces conducted a security operation in search of a Haqqani leader in Tsamkani district, Paktiya province, yesterday. During the operation the security force engaged a group of insurgents with a precision airstrike. After the strike, the Afghan and coalition security force conducted a follow-on assessment and confirmed several insurgents had been killed in the strike. They also confirmed the strike had not injured any civilians or damaged any civilian property.
Train: For Immediate Release
KABUL, Afghanistan (July 22, 2012) — Afghan and coalition forces conducted a security operation in Muhammad Aghah district, Logar province, Saturday.
During the operation, a group of armed insurgents were engaged with a precision airstrike. After the strike, the Afghan and coalition force conducted a follow-on assessment and confirmed multiple insurgents had been killed.
The security force also confirmed the airstrike had not injured any civilians or damaged civilian property.
---
Test 3: ISAF Joint Command Morning Operational Update March 24, 2011 ISAF Joint Command - Afghanistan 2011-03-S-081 For Immediate Release KABUL, Afghanistan (March 24, 2011) A separate Afghan and coalition security force targeted a Taliban IED cell leader in Kandahar today. The leader is responsible for planning, preparing and executing explosive-device attacks on Afghan civilians, Afghan and coalition security forces. The joint security force targeted the leader’s suspected compound in Kandahar City based on tips from citizens. The security team contained the area and detained several suspected insurgents. There were no shots fired and no damage done to the targeted compound.
Train: ISAF Joint Command Operational Update Dec. 22 ISAF Joint Command - Afghanistan 2010-12-S-267 2699, 2935, 3022, 3078 For Immediate Release Download PDF KABUL, Afghanistan (Dec. 22) – Several insurgents were killed by Afghan National Security and International Security Assistance Forces in separate clearing operations in southern Afghanistan over the last 24 hours. An Afghan Army and ISAF patrol spotted some insurgents emplacing an improvised explosive device in Sangin district, Helmand province today. After gaining positive identification, combined forces engaged the enemy position, killing two insurgents.
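The test/train pairs above are near-duplicate templates with only names and dates swapped. A quick way to quantify that kind of overlap is stdlib `difflib` (a rough proxy; real contamination checks usually use n-gram overlap or embeddings). The strings below are shortened versions of Test 1 and its train counterpart:

```python
import difflib

# Rough near-duplicate check for the kind of test/train overlap shown
# above, using difflib's sequence similarity ratio (0..1).

test_item = ("KABUL, Afghanistan (Jan. 25, 2013) During a security operation "
             "in Andar district, Ghazni province, yesterday, an Afghan and "
             "coalition force killed the Taliban leader, Alaudin.")
train_item = ("KABUL, Afghanistan (Jan. 8, 2013) - During a security operation "
              "in Washer district, Helmand province, yesterday, an Afghan and "
              "coalition force killed the Taliban leader, Mohammad Sayed, and "
              "one other insurgent.")

ratio = difflib.SequenceMatcher(None, test_item, train_item).ratio()
print(f"similarity: {ratio:.2f}")
```

A high ratio between test and train items doesn't prove leakage of labels, but it does mean the "unseen" test inputs follow templates the model has effectively memorized.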
So very misleading title
Eh, I can see that, but to me "finetuned model" pretty strongly implies some specific task