Software development company Mantle recently faced a common challenge: it had built a prototype of a next-generation equity management platform in a language well suited to rapid iteration in response to customer feedback.
However, the code used in their production tech stack was different, and to ship the product, Mantle would need to convert the codebase from one language to another. This is a notoriously onerous task that is regularly faced by software teams and enterprises.
“The effort is justified, but the process is painful,” said Dwayne Forde, Mantle co-founder and CTO. “Instead of moving a customer-facing roadmap forward, you are now going to spend a significant portion of valuable engineering time recreating existing functionality.”
Wondering if AI could help, Forde—a trusted industry leader with more than 20 years of engineering experience in roles with companies including VMware and Xtreme Labs—chronicled the process recently in a blog post on Mantle called “Working with AI: Code Conversion.”
He hopes the case study will serve as a useful resource to other tech teams, helping them save time and effort.
It is the second in a series of instructional guides Forde has written for technical teams, as part of an effort to advance the collective interests of the sector by showing how AI can accelerate and enhance their work.
“Our goal wasn’t to achieve 100% perfectly crafted code,” Forde noted. “The goal was to get 80% of the boilerplate and repeated patterns out of the way so that engineers could focus on high-value validation and verification and we could ship the product.”
Not too long ago, it wasn’t possible for LLMs (Large Language Models) to rewrite code at any meaningful scale. Each LLM has a token limit, which caps how much text it can take in and produce in a single pass (a token is roughly a word or word fragment). With lower token limits, models cannot absorb the amount of information required to perform complex tasks like code conversion.
But with rapid advancements in LLM software came higher token limits, and Forde realized his team had exciting new options in front of them. Higher limits meant that models could reason over more material, perform more complex math and inference, and take in and produce dramatically more context.
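As a rough illustration of what a context window means in practice, the sketch below estimates whether a codebase fits inside one. The four-characters-per-token ratio is a common rule of thumb for English text and code, not an exact figure, and the numbers are illustrative rather than Mantle's:

```python
CHARS_PER_TOKEN = 4  # rough rule of thumb, not an exact tokenizer count

def estimate_tokens(text_length_chars: int) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return text_length_chars // CHARS_PER_TOKEN

def fits_in_window(text_length_chars: int, context_window_tokens: int) -> bool:
    """Check whether an estimated token count fits in a model's window."""
    return estimate_tokens(text_length_chars) <= context_window_tokens

# A hypothetical ~200 KB prototype codebase against a 1M-token window:
print(fits_in_window(200_000, 1_000_000))   # ~50k tokens: fits
# The same check against an older 8k-token window:
print(fits_in_window(200_000, 8_000))       # does not fit
```

This is why the jump in token limits mattered: a codebase that once had to be fed in tiny fragments can now be supplied as context in one pass.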
According to Medium, a one-million-token limit means a model can do the equivalent of reading 20 novels or 1,000 legal case briefs.
Forde and his team understood that this dramatically larger token limit would allow them to feed entire coding languages into an LLM, essentially teaching it to be bilingual.
Because converting code is extremely labour-intensive, Mantle knew that having an LLM convert even small amounts of code from one language to another would be hugely beneficial to the delivery time of the engineering project.
“We developed an approach that reduced the scope of work by two-thirds and saved months of developer time,” Forde wrote in his post.
Converting the Mantle prototype project into a new code language would have normally taken months of manual labour.
Instead, Forde said his engineers focused their time experimenting with how to best prompt an LLM to do much of the work for them.
It wasn’t just as simple as feeding the code languages into the LLM and asking it to translate.
Under Forde’s watch, the Mantle team went through a process of innovation and discovery to figure out the best instructions, context and guidance to provide the LLM in its work.
They fed the model code snippets from the prototype's source language, existing production code patterns, and descriptions of their target architecture, and gave the LLM context about specific libraries and utilities used in Mantle’s own tech stack.
“We have certain libraries that we prefer, so adding a section of context was very helpful to make sure the LLM output code was compatible with what we use,” said Forde.
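Mantle's blog post describes its actual prompt format; as a hedged sketch only, assembling that kind of layered context might look like the following. The section names, function name, and example strings here are illustrative assumptions, not Mantle's real prompt:

```python
def build_conversion_prompt(source_snippet: str,
                            target_patterns: str,
                            architecture_notes: str,
                            library_context: str) -> str:
    """Assemble a code-conversion prompt from labeled context sections.

    The headings are illustrative; Mantle's actual format is in
    their blog post "Working with AI: Code Conversion".
    """
    sections = [
        ("Source code to convert", source_snippet),
        ("Target production code patterns", target_patterns),
        ("Target architecture", architecture_notes),
        ("Preferred libraries and utilities", library_context),
    ]
    parts = [f"## {title}\n{body}" for title, body in sections]
    parts.append("Convert the source code above to the target language, "
                 "following the patterns and preferred libraries.")
    return "\n\n".join(parts)

prompt = build_conversion_prompt(
    source_snippet='func Greet(name string) string { return "Hi " + name }',
    target_patterns="Use idiomatic TypeScript with named exports.",
    architecture_notes="Services live under src/services/.",
    library_context="Prefer the in-house HTTP client over raw fetch.",
)
print(prompt)
```

The point of the structure is the one Forde makes: the library-context section steers the model toward output that is compatible with the team's existing stack.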
The team even fed the LLM screenshots to demonstrate how they wanted the information to be presented, something that would not be obvious to AI from the code language alone.
“Screenshots of the existing application give the LLM a visual layout of the application,” said Forde. “The context and direction you provide don’t have to be all verbal. You can use visual reference points as well to get the output you’re after.”
In his blog post, Forde breaks down the step-by-step process Mantle used to convert their code. The process is innovative, iterative and – at times – playful.
At one point, the Mantle team instructed the LLM to “act like a software engineer who could only answer in source code.”
The Mantle team asked the LLM to convert only small sections of code at a time, checked its work, corrected any misinterpretations, and then moved on.
The step-by-step experimentation allowed the Mantle team to refine and improve its work over time, and create an effective process that can now be replicated in future projects.
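The loop described above (convert a small section, check the output, correct misinterpretations, move on) can be sketched as follows. Only the system-prompt text is quoted from Mantle's process; `call_llm` stands in for a real LLM API call, and `review` is a placeholder for the human check-and-correct step:

```python
SYSTEM_PROMPT = ("Act like a software engineer who could only answer "
                 "in source code.")  # the instruction Mantle gave the LLM

def call_llm(system: str, chunk: str) -> str:
    """Placeholder for a real LLM API call. Here it simply echoes the
    chunk; a real implementation would return converted code."""
    return chunk

def review(converted: str) -> str:
    """Placeholder for the human verification step: engineers check the
    output and fix any misinterpretations before moving on."""
    return converted

def convert_codebase(chunks: list[str]) -> list[str]:
    """Convert small sections one at a time, reviewing each before
    proceeding, mirroring the incremental process described above."""
    results = []
    for chunk in chunks:
        converted = call_llm(SYSTEM_PROMPT, chunk)
        results.append(review(converted))
    return results
```

Keeping each unit of work small is what makes the verification step tractable: an engineer can realistically check one section's output, which is much harder with a single monolithic conversion.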
Once a file was generated, Forde's team either reviewed and adjusted the output manually or adjusted the prompt and regenerated it.