Personal AI Services in Use as of February 2025
Sharing some of the AI services currently in use
iOS Developer | WWDC 2019 Scholarship Winner
In our daily use of large models, we typically interact with them through dialogue, and their outputs are presented to us in natural language. In certain scenarios, however, we would rather obtain structured output. Is it possible to leverage the 'intelligence' of large models to build a smart interface that can output any data we want in a structured format?
When using large models, besides evaluating a model's performance, price (cost) is also a crucial parameter. Have you noticed that in the pricing of large model APIs, input costs are divided into two categories: cache hit and cache miss? When there is a cache hit, the price is lower. Moreover, hitting the cache also reduces overall latency. How does this work?
At the beginning of the year, DeepSeek launched the R1 model, and OpenAI subsequently released the o3-mini model. After briefly reviewing the DeepSeek-R1 paper, I have some thoughts and questions about reasoning models.
Some Thoughts on AI Coding
A talk by the creator of the 3Blue1Brown channel
How to Find What You Want to Do
React introduced hooks for functional components, and one of the most important and frequently used hooks is useEffect. Many developers use this hook as a 'data listener,' performing certain actions when its dependencies change. Is this the correct usage? Why is this hook called 'effect' rather than 'listener'? What does 'effect' signify? This article provides the answers.
WWDC 2024 - Understanding Swift Testing