Large language models (LLMs) are a type of artificial intelligence (AI) trained on massive datasets of text and code. This allows them to perform a variety of tasks, such as generating text, translating languages, and writing different kinds of creative content. In the last year, LLMs have become increasingly popular, with OpenAI's ChatGPT opening up to the public and becoming one of the world's fastest-growing applications.
While you could be forgiven for thinking ChatGPT is the only LLM, there are a host of competitors available, some closed source and others open source. Companies behind LLMs like Google's Bard and Meta's LLaMA are also keen to explore the possibilities of this new technology, leveraging their own datasets, scrapers, weightings and refinements to improve their offerings.
LLMs are out of the bag now, and as more people input prompts and provide information, these models deliver value to users and are refined over time. What might seem like a clunky gimmick right now, able to solve only certain niche problems, could become a general-purpose tool that replaces much of the software and services we use today.
While the future of LLMs is anyone’s guess, the ones we have now are growing in use as the early adopters rush in to play with these new tools. There are a number of reasons why LLMs have become so popular.
- First, they can access and process vast amounts of information, allowing them to learn and adapt to new situations quickly and perform various functions, from writing, summarising and translating content to providing research services and even coding and debugging.
- Second, they can generate text that is indistinguishable from human-written text, making them ideal for various applications, such as chatbots, customer service, and marketing.
- Third, they are constantly being updated and improved, which means their capabilities constantly expand.
Popular LLMs are centralised and subsidized by big tech
The most popular LLMs you interact with today are centralised and controlled by a small number of big tech companies, with very few people running LLMs on their own devices. Most users are not yet that committed to the technology; they are mostly tourists who would rather these models run on a cloud server somewhere they can pop into when needed.
This means that these companies have a great deal of power over how these models are used and who has access to them. Additionally, the development and training of LLMs are costly, which means that these companies can subsidise the cost of these models by using their vast financial resources. Companies like Microsoft and Google are willing to burn cash on these endeavours now to refine their models and build a customer base which they can later monetise once user entrenchment reaches a point where it’s sticky enough that a certain price point won’t chase away users.
This centralisation of power raises several concerns.
- First, it could lead to the misuse of LLMs for malicious purposes like spreading misinformation.
- Second, it could unfairly favour big tech companies over smaller businesses and startups and drive massive consolidation and even larger monopolies.
- Third, it could make it difficult for users to control their own data and privacy.
The lack of a clear path toward monetisation
One of the biggest challenges facing LLMs is the lack of a clear path toward monetisation. While these models can generate various valuable products and services, it is still unclear how these products and services can be sold profitably and sustainably at a global scale.
Businesses building on LLMs would either need to build their product on top of a larger LLM provider like OpenAI and trust it will remain a market leader, take the risk of building their own models in isolation, or figure out how to build an AI that leverages popular models like GPT, LLaMA and Bard.
In addition, these companies would need to target a certain niche; for example, an LLM that only does CRM customer surveys could be popular worldwide, but getting it into the hands of businesses across the globe in a cost-effective way is more complex. While the product is completely digital and dematerialised, monetisation via fiat rails carries risk: dealing with a host of custodians and payment processors, as well as forex and clearing risk.
This lack of a clear path toward monetisation has made it difficult for investors to support the development of LLMs. As a result, the field of LLM research is centred around certain early adopters and tech investors and has yet to attract traditional investment to fund expansion.
Once an LLM company can clearly show user growth, user retention, regular income and customer lifetime value, you’ll see traditional investors flow in to fund these companies because they can now rely on metrics they understand.
The Future of Large Language Models
The future of large language models is still being determined despite their rampant popularity. However, several factors suggest that these models will continue to play an increasingly important role in our lives.
- First, the cost of training and deploying LLMs is likely to continue to decrease. This will make it possible for more businesses and organisations to use these models, which will lead to the development of new and innovative applications.
- Second, the capabilities of LLMs are constantly expanding. As these models become more powerful, they will be able to perform a wider range of tasks. This will make them even more valuable to businesses and organisations.
- Finally, the development of LangChain agents could help to address some of the concerns about the centralisation and misuse of LLMs. By making these models more decentralised and transparent, LangChain agents could help to ensure that they are used for good and not for harm.
What Are LangChain Agents?
While LLMs have been useful in general applications, they don't always have the datasets, weighting and user feedback to cater to every niche. As LLM models proliferate, different companies and private individuals are refining them for different use cases. These refined models don't interact with one another and often live in isolation, which limits their reach.
Expecting users to sign up for every LLM model to process certain queries is a non-starter, so there has to be a way of bridging different models together. This is where LangChain comes in; and no, it's not a blockchain, and it doesn't have a token, so you can relax.
The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs. Chains may consist of multiple components from several modules. Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, many integrations with other tools, and end-to-end chains for common applications.
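To make the "chaining" idea concrete, here is a minimal sketch in plain Python. It does not use LangChain's actual API; the components below are toy stand-ins that simply show how a prompt template, a model call, and a post-processing step can be composed into a single callable chain.

```python
from typing import Callable, List

def make_chain(steps: List[Callable[[str], str]]) -> Callable[[str], str]:
    """Compose components: each step's output becomes the next step's input."""
    def run(prompt: str) -> str:
        result = prompt
        for step in steps:
            result = step(result)
        return result
    return run

# Toy components: a prompt template, a fake "LLM" call, and a cleanup step.
template = lambda topic: f"Write one sentence about {topic}."
fake_llm = lambda prompt: f"[model output for: {prompt}]"
cleanup = lambda text: text.strip()

chain = make_chain([template, fake_llm, cleanup])
print(chain("Bitcoin"))
# → [model output for: Write one sentence about Bitcoin.]
```

A real chain would swap `fake_llm` for a call to an actual model and might mix in other utilities (search, calculators, other chains), but the composition pattern is the same.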
LangChain agents will make it easier for LLMs to talk to one another, leverage expertise and training from a global marketplace, and rapidly drive down the cost of AI learning. LangChain ensures that you don't have to reinvent the wheel on certain training; instead, you can request and receive responses from a marketplace of models. But it does have one drawback.
Money can’t flow as fast as API calls between AI models, at least not the money we’re accustomed to. However, if a network could instantly settle payments globally in a unit of account recognised worldwide and be programmable to fire off on certain requests, that could be a game changer.
That payment rail could be coupled with LangChain to unlock an entirely new AI request and response marketplace that is monetised with real-time payments, and this is where LangChainBitcoin puts up its hand as a possible solution.
What Is LangChainBitcoin?
The LangChainBitcoin package has two main features:
- LLM Agent BitcoinTools: Using the newly available OpenAI GPT-3/4 function calls and the built-in set of abstractions for tools in LangChain, users can create agents that are capable of holding a Bitcoin balance (on-chain and on LN), sending/receiving Bitcoin on LN, and generally interacting with a Lightning node (LND).
- L402 HTTP API Traversal: LangChainL402 is a Python project that enables users of the requests package to navigate APIs that require L402-based authentication easily. This project also includes a LangChain APIChain-compatible wrapper that enables LangChain agents to interact with APIs that require L402 and Macaroons for payment or authentication. This enables agents to access real-world resources behind a Lightning-metered API.
This means that anyone creating an LLM locally, providing data to an LLM or providing assistance in a response can paywall this information and allow larger LLMs and their customers to access it with a micropayment.
A business or individual can sell a prompt by gating access to an API capable of responding to queries, while potential buyers can then ask their own local agent to evaluate the response given a set of criteria. If the agent approves of the response, then further responses can be purchased.
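The request flow described above can be sketched as a small simulation. This is illustrative only: the token format, function names and values here are simplified stand-ins for the real L402 handshake, in which a server rejects an unpaid request with HTTP 402, hands back a macaroon plus a Lightning invoice, and accepts a retry that presents the macaroon together with the payment preimage.

```python
def gated_api(request_headers: dict) -> tuple:
    """Server side: demand payment (HTTP 402) until a valid L402 token arrives."""
    token = request_headers.get("Authorization", "")
    if token == "L402 macaroon-abc:preimage-123":
        return 200, "premium prompt response"
    # The 402 challenge carries the macaroon and a Lightning invoice to pay.
    return 402, {"macaroon": "macaroon-abc", "invoice": "lnbc10n1..."}

def pay_invoice(invoice: str) -> str:
    """Client side: pay over Lightning; the returned preimage proves payment."""
    return "preimage-123"

# Client flow: the first request is rejected with 402, so pay and retry.
status, challenge = gated_api({})
assert status == 402
preimage = pay_invoice(challenge["invoice"])
headers = {"Authorization": f"L402 {challenge['macaroon']}:{preimage}"}
status, body = gated_api(headers)
print(status, body)  # → 200 premium prompt response
```

In the real project, the `requests`-compatible wrapper handles the 402 challenge, payment and retry automatically, so an agent simply sees the final response.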
On the customer side, they can use their preferred LLM model and connect a Lightning wallet to it to pay for requests that fall outside the remit of their current service provider's toolset, or to compare prompt responses from different models.
How does this tie into the Lightning Network?
If the Lightning Network becomes the de facto backbone for settling payments between different AI models and their customers worldwide, these payments will require more liquidity and routing paths. That demand would encourage AI companies to spin up their own nodes, while ordinary Lightning node runners could help out by creating paths between popular Lightning wallets and the various AI Lightning nodes.
By monetising API calls with satoshis on the Lightning Network, you now have a method of making micropayments instantly back and forth between the various databases, suppliers, models and customers, regardless of where in the world they are situated. This new demand for constant micropayments would also generate more fees on the Lightning Network and help make running a routing node a more attractive business.
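A back-of-the-envelope sketch shows how that routing income adds up. The fee formula follows Lightning's standard channel policy (a fixed base fee plus a proportional fee in parts-per-million), but the payment size, fee settings and daily call volume below are made-up numbers for illustration.

```python
def routing_fee_msat(amount_msat: int, base_fee_msat: int = 1000,
                     fee_rate_ppm: int = 100) -> int:
    """Fee a routing node charges to forward one payment, in millisatoshis."""
    return base_fee_msat + amount_msat * fee_rate_ppm // 1_000_000

payment_msat = 10_000 * 1000        # a 10,000-sat API call, in millisats
per_hop_fee = routing_fee_msat(payment_msat)
daily_calls = 50_000                # hypothetical marketplace volume
daily_income_sats = per_hop_fee * daily_calls // 1000

print(per_hop_fee, daily_income_sats)  # → 2000 100000
```

Individually the fees are tiny (2 sats per forward here), but at the call volumes an AI marketplace could generate, they become a meaningful revenue stream for routing nodes.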
Do your own research.
If you want to learn more about LangChain on Bitcoin, use this article as a jumping-off point and don't take what we say as the final word. Take the time to research, check out the official resources below or review other articles and videos tackling the topic.
- AI for All: Powering APIs and Large Language Models with Lightning
- GitHub – LangChainBitcoin
- GitHub – LangChain
Are you a Bitcoin and Lightning fan?
Have you been using Lightning to make micro-payments? Stream sats or engage with apps? Which app is your favourite? Do you run a Lightning node? Have you tried all the forms of Lightning payments? Which one do you prefer?
Let us know in the comments down below.