Full circle: from one laptop to rule them all to specialized, function-specific devices
For about 25 years my digital life was wrapped tightly around whatever personal laptop I had. Since for most of that time I worked as a remote consultant (except for gigs at Singapore-based Ola Search, Google in Mountain View, and currently at Capital One in Urbana, Illinois), my personal laptop also covered work activities. There was something close and comforting about having one digital device that I relied on.
Digital life is very different now. Because of concerns about ‘always being online’ and not paying enough attention to the non-digital world, I favor just wearing an Apple Watch and leaving my iPhone at home. The Apple Watch is adequate for phone calls, messaging, and, on rare occasions, email, and it is not something I spend any real time paying attention to. I can spend the better part of a day shopping, walking in a park, eating out, or perusing books in a library and spend only a few minutes looking at my watch. A huge improvement over cellphone addiction!
For work, I have dedicated, secure devices for getting my work done - the definition of purpose-specific.
For home use, I have a powerful GPU laptop from System76 that I use only for machine learning and for experiments fusing ‘classic’ symbolic AI with functional components that are just wrappers for deep learning models.
Also for home use, I have a MacBook that is primarily used for long writing sessions when I am working on a book project. Example code for my books tends to be short and pedantic, so that development also lives on the MacBook.
I depend on my iPhone when I travel to stay organized and to have local copies of required digital assets, including on-device cached Netflix movies, Audible audio books, and Kindle books.
Lastly, the device I spend more time on than any other (except for work devices) is my iPad, on which I do close to 100% of my web browsing, almost all of my reading, most of my entertainment, and lots of lightweight writing - like this blog post, as well as editing and small additions to my current book project.
If I count all of the cloud-based compute infrastructure for work as one huge virtual device, the digital devices I use every week weigh in at eight. When I retire from my job at Capital One later this spring, that count falls to five devices - still very different from the old days of having one laptop for everything.
Looking ahead, perhaps only 5 or 10 years from now, I expect the device profiles used by typical consumers to change a lot - most likely one personal device that is always with you, plus many different peripheral and possibly compute devices in your living and working environments that are shared with other people. I think there are three possibilities for what the one personal device may be:
- A smartphone
- Something like an Apple Watch
- Something like a one-ear-only, AirPod-like device
Whatever the profile is for your personal digital device, it will be securely connected to all shared devices (e.g., smart TVs, shared keyboards and monitors, shared tablets of all sizes, smart cars, home entertainment centers, the cell phone network infrastructure, point of sale devices in stores, etc.).