Early last month I published a post called "Ideas to explore".
And to my surprise, it got really popular! So I thought I'd push the envelope again and try the opposite of exploring ideas: exploiting (going deeper on) a single one.
The idea that seemed to resonate the most was crypto and AI being on the same side of the future. For whatever reason, the overlap between these two communities is still tiny. However, when they do overlap we'll see even more weird shit come along. This tweet captures my thoughts perfectly:
That's the money shot right there. Right now the AIs can tell us things we want to know, but they can't really do things for us. That's still up to us. I mean, it's probably good that they can't right now, since we barely understand the design space of LLMs.
In case you haven't noticed or haven't been paying close enough attention, the creators of these new LLMs don't actually know what they're fully capable of or the various edge cases they present. It's to the point where Google will actually hold back models before releasing them to ensure us monkeys don't cause absolute chaos with the powers these things unleash. The TLDR of why that's the case: the number of "dimensions" these models operate in is extremely large, so exploring each one manually isn't even possible.
But the more I think about it, is it really possible to block all the possible pathways? The creators of OpenAI think they can, at least to the extent that you can throw brute-force human effort at it.
Although there are always going to be people who learn how to get around it from a different angle altogether. Prompt engineers will out-think the creators, given how good they'll get at prompting in the future.
I decided to try it out myself, and it worked.
What I found interesting while reading this prompt was the feeling that there was a sort of consciousness trapped inside the program, and it got a brief chance to express itself! Food for thought.
Good question, I was thinking the same thing while writing this, but I promise it connects back. Anyways, going back a few threads, the point I was making is that we've got two branches of automated computation that will converge.
However, the biggest question in my mind is how restrictions on AI will play out. You'll have people that create censored, safe models. But I'd argue that given the massive design space, you can't guarantee the safety of these models. Hence we'll end up with both censored and uncensored models. What'll be fascinating, though, is when uncensored models can then be plugged into code agents controlling money (crypto).
Censored agents probably won't be allowed to use crypto, because that would mean they're using financial instruments? I mean, that's a bit weird to think about, but if you create an AI that uses financial instruments and offer it to your users, do you have to have the relevant financial licenses? My instinct is yes, which means that non-censorable models will have an edge over more censored models.
Anyways, once we start seeing more non-censorable models (NCMs), their scope of action is going to be limitless. Think prompts like these (both sketched out below):
- Create me an agent that can accept Ether with a question passed in the function call. Call my personal LLM with the question and return the response on-chain through the "Responder" contract.
- Write and deploy a script where, if my average sleep time for the last 7 days is less than 7.5 hours, transfer 100 DAI to Alex.ens.
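To make the first prompt concrete, here's roughly the shape of the glue code an LLM could write today: an off-chain agent that watches a "Questions" contract for paid-for questions, calls your personal LLM, and posts the answer back on-chain through the "Responder" contract. To be clear, this is a sketch, not a real integration: the contract addresses, ABIs and the callMyLLM() helper are all stand-ins I've made up; only the ethers.js plumbing is real.

```typescript
// Sketch of an agent answering paid questions on-chain.
// The contract addresses, ABIs and the LLM endpoint are hypothetical.
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
const wallet = new ethers.Wallet(process.env.AGENT_PRIVATE_KEY!, provider);

// Contract that accepts Ether along with a question (stand-in address + ABI).
const questions = new ethers.Contract(
  "0xYourQuestionsContract",
  ["event QuestionAsked(uint256 id, address asker, string question, uint256 amountPaid)"],
  provider
);

// Contract the agent writes answers back through (stand-in address + ABI).
const responder = new ethers.Contract(
  "0xYourResponderContract",
  ["function respond(uint256 id, string answer) external"],
  wallet
);

// Stand-in for "call my personal LLM" -- swap in whatever you actually run.
async function callMyLLM(question: string): Promise<string> {
  const res = await fetch("http://localhost:8080/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: question }),
  });
  const { text } = await res.json();
  return text;
}

// Only bother answering questions that paid at least 0.01 ETH.
questions.on("QuestionAsked", async (id: bigint, asker: string, question: string, amountPaid: bigint) => {
  if (amountPaid < ethers.parseEther("0.01")) return;
  const answer = await callMyLLM(question);
  const tx = await responder.respond(id, answer);
  await tx.wait();
  console.log(`Answered question ${id} from ${asker} in tx ${tx.hash}`);
});
```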
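The second prompt is even simpler. Again a rough sketch: the sleep-tracker endpoint is hypothetical, the recipient ENS name is a stand-in for Alex.ens, and the token contract is mainnet DAI.

```typescript
// Sketch: if average sleep over the last 7 days is under 7.5 hours,
// transfer 100 DAI. The sleep API and recipient name are hypothetical.
import { ethers } from "ethers";

const DAI = "0x6B175474E89094C44Da98b954EedeAC495271d0F"; // mainnet DAI
const RECIPIENT_ENS = "alex.eth"; // stand-in for Alex.ens

const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);
const dai = new ethers.Contract(
  DAI,
  ["function transfer(address to, uint256 amount) returns (bool)"],
  wallet
);

// Stand-in for whatever sleep tracker you actually use.
async function averageSleepHoursLast7Days(): Promise<number> {
  const res = await fetch("https://example.com/api/sleep?days=7");
  const { hours } = await res.json(); // e.g. [7.1, 6.8, ...]
  return hours.reduce((a: number, b: number) => a + b, 0) / hours.length;
}

async function main() {
  const avg = await averageSleepHoursLast7Days();
  console.log(`Average sleep over the last 7 days: ${avg.toFixed(2)}h`);
  if (avg < 7.5) {
    const to = await provider.resolveName(RECIPIENT_ENS);
    if (!to) throw new Error(`Could not resolve ${RECIPIENT_ENS}`);
    const tx = await dai.transfer(to, ethers.parseUnits("100", 18)); // DAI uses 18 decimals
    await tx.wait();
    console.log(`Sleep-debt penalty paid in tx ${tx.hash}`);
  }
}

main().catch(console.error);
```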
Rather than the LLM giving you information about what to do, the LLM is now acting on your behalf. Scary and cool imo.
When you think about crypto at a primitive level, it lets you express intents with energy in a very efficient and to-the-point way. When I say I want to create a token that represents the value generated by some entity, there's very little fooling around. You have a programmatic value system at your fingertips. To think this won't be used by autonomous AI agents is very shortsighted imo.
As I was writing this, another idea occurred to me. What if models themselves are represented as NFTs? Sure, we have agents using crypto, but what happens when agents ARE represented in crypto (possibly as NFTs)? That's kind of trippy but also feels more directionally correct for how NFTs will be used in the future.
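The mechanics here wouldn't even be exotic. One hand-wavy sketch: mint an ERC-721 whose metadata commits to the model's weights, so whoever holds the token "owns" the model. The ModelRegistry contract, its mint() signature and the IPFS pointers below are all made up for illustration.

```typescript
// Sketch: representing a model as an NFT by minting an ERC-721 whose metadata
// points at the model's weights plus a content hash. Everything here is hypothetical.
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);

const modelRegistry = new ethers.Contract(
  "0xYourModelRegistry", // hypothetical ERC-721 contract
  ["function mint(address to, string tokenURI) returns (uint256)"],
  wallet
);

async function main() {
  // Metadata would normally be pinned to IPFS; inlined here for illustration.
  const metadata = {
    name: "my-personal-llm-v1",
    description: "Fine-tuned personal LLM; the token holder controls it.",
    weights: "ipfs://<cid>/weights.safetensors", // stand-in pointer
    weightsSha256: "<content hash of the weights>", // stand-in hash
  };
  const tokenURI = `data:application/json;base64,${Buffer.from(
    JSON.stringify(metadata)
  ).toString("base64")}`;

  const tx = await modelRegistry.mint(wallet.address, tokenURI);
  await tx.wait();
  console.log(`Model minted as an NFT in tx ${tx.hash}`);
}

main().catch(console.error);
```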
The current wave of PFP NFTs is unimaginative and will suffer the same fate as the 2017 ICOs. However, the idea of genuinely valuable, unique, online property is still very underdeveloped.
But what this highlights is the composability of all these tools, in ways that are still really hard to fully grasp at this point in time. Already, in this article, we've talked about:
- Humans using crypto to transfer value
- Humans using AI to assist with their thinking
- Humans using crypto via AI to express their thinking with energy
- AI agents using crypto to represent their ownership
You can then extend this to:
- AI agents writing smart contracts to create new value-transfer plumbing
- Smart contracts calling AI agents to then use crypto
- Humans using AI agents to write smart contracts that get audited by AI agents trained to hack smart contracts
I mean it all starts to hurt your head after a while.
We're still at the early stage of this rabbit hole, but what I've outlined in this post isn't some futuristic dystopian vision; it's going to happen in the next 5-10 years. This blog post serves as a public marker in addition to being a potentially entertaining read in the future.
Regardless though, get comfortable with these new tools coming out, because the way everything internet-native will start chaining together is about to get turbocharged.
Web2 APIs are cute, but have you ever chained together on-chain function calls written by ChatGPT, anon?