LinkMyPrompt
Create a shareable link for your prompt and open it instantly in ChatGPT, Claude, Gemini, Grok, or Perplexity — one click, and the prompt is ready to run.
The online whiteboard of Kristofer Palmvik
Create a shareable link for your prompt and open it instantly in ChatGPT, Claude, Gemini, Grok, or Perplexity — one click, and the prompt is ready to run.
We don't know where AI is taking us. Big US tech firms want to rush into that unknown future without guardrails; the Communist Party of China wants the state to oversee that research. One version promises a hyper version of consumer capitalism; the other, a world in which the state determines what you can or can't do with this technology.
We as an industry right now, have got an amazing tool that will forever change how we develop products (and do many other things too). But we are viewing it with the same focus at productivity, cost-reduction and efficiency gain that we did before this paradigm shift. This leads us wrong; instead lets think about how we can optimize the entire value chain and create real value. Who knows, maybe the bottleneck in our process cannot be solved with faster output?
LLM-API och AI-tjänster på egen GPU-hårdvara i Sverige. Bortom Cloud Act och Schrems II — all inferens stannar hos oss.
We are discovering better ways of building software and operating organizations in the age of artificial intelligence by preserving, governing, and evolving the provenance of decisions that shape our systems.
My hope (once the dust settles) is that we come out the other side with more collaboration. Instead of competing for leverage, I'm hoping individual contributors find new ways to work together. For example, what if Product Managers and Engineers did more AI-driven pair programming? The PM could focus on customer behavior and product goals. The engineer could evaluate architecture, security, and maintainability. They would iterate together in real time, using LLMs.
The more time I spend with coding agents, the more I become convinced that they are damn-near incompatible with working in teams. I've suggested this before, but I really think more people should be chewing on this. The bottleneck for software teams—the thing that's always made them less than the sum of their parts—is the handshake problem. It's the one thing from The Mythical Man-Month everyone remembers: "Adding manpower to a late software project makes it later."
Apple and Google are helping users to find apps that create deepfake nude images of women, a new Tech Transparency Project investigation has found, showing how the platforms are key participants in the spread of AI tools that can turn real people into sexualized images.
I want to pull on a thread that we talked about in the beginning - the three emerging camps of peoples relationships to AI. This sits right at the heart of my own tension right now - I’m trying to stay on the frontier, discovering the patterns that work and those that don’t. At the same time, I’m thinking about my peers, and the impact of these changes on them and on our profession. I feel like I’m losing my ability to even talk to some folks, and it stresses me out.
The coordination problem does not change. The need for someone to own the outcome does not change. The fragility of interfaces does not change. The cost of getting decisions wrong does not change. Organisations that understand this will use AI to make their teams more effective without assuming they can make them smaller in proportion. They will recognise that a 10-person team producing the output of 30 needs better coordination structures, not fewer coordinators.
The goal of writing is not to have written. It is to have increased your understanding, and then the understanding of those around you. When you are tasked to write something, your job is to go into the murkiness and come out of it with structure and understanding. To conquer the unknown.
Most people wing it. They sit down with AI and improvise. That's like walking into a kitchen and tossing random ingredients in a pan. Sometimes it works. Usually it doesn't. Good prep changes the result. The best AI users don't know magic words. They've prepped their ingredients: who they're cooking for, what they're making, how it should taste.
I have no idea what I actually believe about how AI will transform the industry. What I know is that if I get to work building it, I will learn what it is that I believe. They will reinforce each other. I will find my footing through walking the road and doing the work.
if something went wrong with our AI systems tomorrow, an unexplainable output, a biased decision, a data breach, a regulatory inquiry, who in this organisation would I call first? If the answer is a committee, a shared inbox, or a long pause followed by uncertainty, you already know what you need to build. One person. Clear mandate. Real authority. Full accountability.
Turso is the lightweight database that scales to millions of instances. Build agents, AI assistants, and intelligent apps by deploying databases everywhere: on servers, browsers, and devices, just like files. Turso is a complete SQLite drop-in replacement, built for the agentic future.
This is, without exaggeration, one of the most comprehensive looks we’ve ever gotten at how the production AI coding assistant works under the hood. Through the actual source code. A few things stand out: The engineering is genuinely impressive.
With the launch of models like Claude Opus 4.5, it suddenly became possible to ask AI to build something for you, and it’d do it in a nearly fully functional way. That level of accuracy led to people taking a hands off approach to app building, and even enabled people who’ve never coded before to make apps. Whether or not you like this trend is another discussion. Either way, there’s one thing that holds true: App Store review isn’t cut out for it.
Paperclip is a Node.js server and React UI that orchestrates a team of AI agents to run a business. Bring your own agents, assign goals, and track your agents' work and costs from one dashboard. It looks like a task manager — but under the hood it has org charts, budgets, governance, goal alignment, and agent coordination.
To design the most effective combinations, the engineers used AI to evolve novel body configurations. Instead of sticking with standard dog- or human-like designs, the AI churned out strange new “species” of machines that no human engineer would have conceived. When connected to other modules, the metamachines undulate like seals, bound like lizards, or spring like kangaroos.
Operators of AI models in Europe should pay "a revenue-based levy... reflecting their use of content publicly available online," Arthur Mensch wrote in an op-ed for the Financial Times. "Proceeds would flow into a central European fund dedicated to investing in new content creation and supporting Europe's cultural sectors," he added.
The act of programming has lived in extract for 45 years and we’re used to that,” he said. Then the genie of generative AI coding assistants escaped from the bottle, “and all of those certainties have been thrown out of the window,” he said. Exploration doesn’t look very much like engineering from the books. “It’s about cutting corners to get answers, throwing away what you’ve done, starting over, being creative, sniffing out opportunities,” Beck said.
Everyone is talking about how quickly they’re building things, how many agents they’re using at the same time, and how much time they’re saving. I feel like if I’m working at a regular speed, I’m not doing enough.
As AI generates more of the code, the nature of how teams collaborate around changes is shifting. Review is one of the few systematic places where humans on a team exercise judgment together about the system they share. What they’re judging is changing – less mechanical correctness, more intent and direction – but the collaborative act is worth protecting.
AI in the workplace is transforming the technical systems. Much less attention is being paid to the cultural systems that surround it. New tools can be exciting, especially to management and motivated individual contributors. For the rest of your teams they can clearly be seen as threats to the “way we do things around here.” If we don’t address those systemic cultural issues we’ll never be able to take full advantage of these new tools in a way that truly maximizes their benefit.
Shock! Shock! I learned yesterday that an open problem I’d been working on for several weeks had just been solved by Claude Opus 4.6— Anthropic’s hybrid reasoning model that had been released three weeks earlier! It seems that I’ll have to revise my opinions about “generative AI” one of these days. What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving. I’ll try to tell the story briefly in this note.
Our profession is learning Both as product developers (in the wider sense of the word), and consultants. The tools, ways, ideas and needs are ever changing. We have chosen to be in school for ever. Welcome! We need to keep learning in the paradigm shift too. But when you do - learn deeper, reflect and think deeply. Use the tool, but focus on the practice. Do the practice but try to understand the principle behind it.
tldraw, the outstanding collaborative drawing library, are moving their test suite to a private repository - apparently in response to Cloudflare's project to port Next.js to use Vite in a week using AI. They also filed a joke issue, now closed to Translate source code to Traditional Chinese.
All of those things had to be true at the same time. Well-documented target API, comprehensive test suite, solid build tool underneath, and a model that could actually handle the complexity. Take any one of them away and this doesn't work nearly as well.
The LLM experiment has taught us one thing: people are willing to tolerate error, explain themselves, collaborate, trust. Today, they are choosing to invest this positive energy into a synthetic slop extruder. But tomorrow, they could invest it into their fellow human beings, if they chose to do so.
WebMCP aims to provide a standard way for exposing structured tools, ensuring AI agents can perform actions on your site with increased speed, reliability, and precision. By defining these tools, you tell agents how and where to interact with your site, whether it's booking a flight, filing a support ticket, or navigating complex data. This direct communication channel eliminates ambiguity and allows for faster, more robust agent workflows.
We identified a misconfigured Supabase database belonging to Moltbook, allowing full read and write access to all platform data. The exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents. We immediately disclosed the issue to the Moltbook team, who secured it within hours with our assistance, and all data accessed during the research and fix verification has been deleted.
Large language models are not neutral mirrors of the world. They systematically privilege wealthy, Western, and data-rich places while marginalizing much of the Global South. This project demonstrates that ChatGPT consistently ranks places like the United States and Western Europe more positively and portrays poorer regions as less desirable.
I have a lot to say about this, but we are packing it up into something a bit more digestible, so I’ll just leave you with a few core beliefs that I think will be increasingly important in the age of AI.
The fundamental thread through all these roles is strategy. I believe strategic thinking will be the new constraint with AI. Strategy is not goals or vision statements. It is a clear path to achieve those goals. And it is always best done in a group with diverse perspectives.
Boring AI features are reliable AI features that feel invisible to users - they just work. Read only what you need. Constrain with clear rules. Act with structured outputs and safe tools. Explain what happened. Start with the smallest useful feature. Use the patterns that fit your use case. Monitor everything. Improve based on real user behavior, not theoretical performance metrics. The goal isn’t to build impressive AI demos. It’s to ship features that users depend on every day.
I wish the AI coding dream were true. I wish I could make every dumb coding idea I ever had a reality. I wish I could make a fretboard learning app on Monday, a Korean trainer on Wednesday, and a video game on Saturday. I’d release them all. I’d drown the world in a flood of shovelware like the world had never seen. Well, I would — if it worked.
The answer isn't to retreat to 2002 and plain text files for agents to parse. It's to build what actually solves the problem: content infrastructure that AI can read, write, and reason about. You shouldn't build a CMS from scratch with grep and markdown files. You probably shouldn't have to click through forms to update content either. Both of these can be true.
Why buy a CRM solution or a ERM system when “AI” can generate one for you in hours or even minutes? Why sign up for a SaaS platform when Cursor can spit one out just as good in the blink of an eye? But when we look beyond the noise – beyond these sensational flying saucer reports – we see nothing of the sort.
The world doesn't need faster adoption. It requires deeper awareness, better judgment, and leaders who know why they're adopting something new. The leaders I trust most aren't the ones who adopt everything early. They’re the ones who combine curiosity with strategic thinking — who stay informed, evaluate what truly matters, and adopt when the timing is right. The future will not be shaped by those who adopt first, but by those who understand best.
I've tried this on the C# version of ROUND_3 in Emily’s repo, and it was a lot of fun trying to get Claude to do exactly what I want. It felt a bit like playing real golf with a bazooka. I did manage to get one almost-clean round where I didn’t need to edit the code myself much at all, but – by jingo – we went around the houses!
Instead of replacing our current recommendation system, this model became a new signal to boost the right content within Curate — our in-house editorial front-page management and content recommendation system. Thus, there is no change in the way newsrooms work or select content; it simply provides better recommendations for the nonsubscribers we want to convert.
Zed's goal is to make your codebase a living, navigable history of how your software evolved, where discussions with humans and AI agents are durably linked to the code they reference and always up-to-date. It's an evolution beyond version control that incorporates not just the code itself, but also the background information of how and why the code got into a particular state—context that AI agents can query to make more informed edits, understanding the assumptions, constraints, and decisions that shaped the existing code.
So how are you systematically checking that the output your product spits out is good? Smart product/AI leaders have been shouting for months about how AI product managers need to get good at building evals (Lenny Rachitsky, Aakash Gupta, Teresa Torres, Hamel Hussain & Shreya Shankar, to name a few). So I’m shocked to see how many AI product builders have not thought this through, and leave it at a random vibe-check every now and then.
The rise of AI prototyping tools is a reminder of what design truly involves: not just arranging pixels or chasing fidelity but interpreting context, establishing priorities, and creating nuance. The real work of design remains in the judgment, empathy, and intent that only human designers can provide.
Interview synthesis is cognitively challenging. It takes time. And that's time that many teams simply don't have. That's why I'm not surprised to see so many teams turn to generative AI for help with interview synthesis. But this worries me.
Turning a feature on doesn’t guarantee people will use it. Adoption depends on motivation, action, and recovery when things don’t go perfectly. This one banner illustrates three lessons product teams can apply right away.
This all means that I value a strong coupling between skills and individuals, and I don’t see the kind of “Superhuman” AI engineer who can wear every hat having very much purchase beyond the earliest prototyping stage of a project. I think it’s a good idea to walk fast in the opposite direction, diversifying teams and deepening niches.
In all of these cases, AI “works great” for person 1, and is a burden to person 2. Person 2 could: see their time wasted, be forced to explain themselves for words they didn't use, risk being fired or worse. It may have legal consequences.
Worshipers from all the world's major religions are experimenting with chatbots. But Hinduism, with its long tradition of welcoming physical representations of gods and deities, offers a particularly vivid laboratory for this fusion of faith and technology.
In the right context, the icon's meaning is clear. However, the AI Sparkle icon doesn’t always convey granular meaning. Users don't always know what kind of AI they're interacting with in a Google product (ML, LLM, image generator, etc.) or the precise action they'll receive — whether it's newly generated text, AI-powered analysis, or image editing suggestions.
The principle is simple: go slower in the small so you can go faster in the large. Take pair programming. On paper, you cut output in half. In practice, you double shared understanding. You surface assumptions early. You build trust. You improve quality. You raise the baseline of capability across the team.
The sparkles icon has become increasingly prevalent in user interfaces, particularly in association with AI-driven features, but it suffers from ambiguity and lacks a standardized meaning.
Most teams using AI tools responsibly aren’t vibe coding at all—they’re operating in entirely different regions of what I see as the AI development matrix. Understanding where you sit on this matrix determines whether AI becomes a superpower or a disaster.
Right now, most of our tests are “public”. The AI can see them, learn from them, and optimize for them. This works for basic functionality. But it creates a risk. The AI might generate code that passes all your tests but doesn’t actually solve the problem. Like writing an if statement for every number between 1 and 2000 instead of using a proper algorithm.
Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.
Suddenly there was no need to pay people like Wilson to write about bat bites and snake nests – AI can do that for free in a few seconds. So what’s a guy to do? “I’m not liking AI but I started to study it every day,” he says. “You have to work with it because it’s not going anywhere and we can’t do anything about it.”
AI coding solutions are bringing together designers and developers to collaborate in a way they never could before. As one engineering leader recently put it: "Our source of truth needs to be our codebase. If designers can work in a tool they're good at and developers work in a tool they're good at, and they’re both working on the same thing, that's when real collaboration happens. It's a handshake, not a handoff."
Past the content and inside the linguistic patterns, you’ll find the creeping uniformity of AI voice. Words like “prowess” and “tapestry,” which are favored by ChatGPT, are creeping into our vocabulary, while words like “bolster,” “unearth,” and “nuance,” words less favored by ChatGPT, have declined in use. Researchers are already documenting shifts in the way we speak and communicate as a result of ChatGPT — and they see this linguistic influence accelerating into something much larger.
If some people start to develop SCAIs and if those AIs convince other people that they can suffer, or that it has a right to not to be switched off, there will come a time when those people will argue that it deserves protection under law as a pressing moral matter. In a world already roiling with polarized arguments over identity and rights, this will add a chaotic new axis of division between those for and against AI rights.
Simply put, at the current trajectory, we’re going to hit a wall, and soon. There just isn’t enough revenue and there never can be enough revenue. The world just doesn’t have the ability to pay for this much AI. It isn’t about making the product better or charging more for the product. There just isn’t enough revenue to cover the current capex spend.
The question isn't whether AI will commoditise your software. It will. Or whether customers will build their own tools. They will. Or whether 100 competitors will emerge in your space. They will. The only question that matters is this: When your customers can build your product in a weekend, why will they still choose you on Monday?
Good engineers apply modern best practices – automated testing, refactoring, small and frequent releases, continuous delivery – and design systems to stay adaptable under change. They pair this with a product mindset, making technical decisions in service of real user and business outcomes. It’s currently being labelled “product engineering” and talked about as the hot new thing, but it’s essentially agile software development as it was originally intended. In the AI-assisted era, these aren’t just nice-to-have skills – they’re the only way to get meaningful benefit. Without them, AI simply helps teams create bad software faster.
The researchers used Gemini's web of connectivity to perform what's known as an indirect prompt injection attack, in which malicious actions are given to an AI bot by someone other than the user. And it worked startlingly well. The promptware attack begins with a calendar appointment containing a description that is actually a set of malicious instructions. The hack happens when the user asks Gemini to summarize their schedule, causing the robot to process the poisoned calendar event.
we are committed to updating our environmental impact reports in the future and participating in discussions around the development of international industry standards. We will advocate for greater transparency across the entire AI value chain and work to help AI adopters make informed decisions about the solutions that best suit their needs. The results will later be available via ADEME’s Base Empreinte database, setting a new standard for future reference for transparency in the AI sector.
Turns out, there are no ethical AI companies. What I found instead was a hierarchy of harm where the question isn’t who’s good — it’s who sucks least. And honestly? It was ridiculously easy to uncover all their transgressions.
Research papers from 14 academic institutions in eight countries – including Japan, South Korea and China – contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found.
We let Claude manage an automated store in our office as a small business for about a month. We learned a lot from how close it was to success—and the curious ways that it failed—about the plausible, strange, not-too-distant future in which AI models are autonomously running things in the real economy.
What will happen when we blindly apply Big Agile + premature AI tooling to core health services and patient data? Financial services? Social media and the spread of misinformation? Government infrastructure? Military infrastructure? It’s 2025 and software engineering is the backbone of pretty much everything. But for all its prevalence and influence, most people don’t have the technical literacy to even grasp the basics.
Traditional product teams often follow a linear flow: PM defines, designer mocks, engineers build. But that model breaks down when you're building AI-native products. We’re not designing screens for users to click—we’re building systems that interpret intent and act on it. That changes everything. To get it right, everyone needs to be involved early: product, design, frontend, backend, data. Discovery and delivery collapse into the same loop. Feedback is faster. Outcomes are less predictable. The system learns from use, so you need tight, collaborative cycles to learn with it.
Somewhere in the last few months, something fundamental shifted for me with autonomous AI coding agents. They’ve gone from a “hey this is pretty neat” curiosity to something I genuinely can’t imagine working without.
A YouTube channel, “The World News” had taken McGibbon’s copy and turned it into a 14-minute video, with the entire content, approximately 2,000 words, read out by an AI narrator. Childhood photographs supplied by McGibbon to the Daily Mail for one-off use only were also added in as a video montage without permission.
Many expect parallel execution to be about multi-tasking different agents working on the same codebase or subtasks, there is an emerging use case which is parallel exploration: just ask the agent to come up with different variations and you can judge which is the best one or and why not ask a LLM to be a judge too... or at least put in a vote for you?
If you’re making requests on a ChatGPT page and then pasting the resulting (broken) code into your editor, you’re not doing what the AI boosters are doing. No wonder you’re talking past each other.
Många sitter fast i vardaglig brandsläckning, där en enkel fråga kan kräva fyra olika system och ett ägarskap som ingen vill kännas vid. I den miljön kan inte AI skapa värde. Den förstärker bara förvirringen.
The Gmail team built a horseless carriage because they set out to add AI to the email client they already had, rather than ask what an email client would look like if it were designed from the ground up with AI. Their app is a little bit of AI jammed into an interface designed for mundane human labor rather than an interface designed for automating mundane labor.
The most effective driver of AI adoption isn't better models or improved accuracy—it's success stories from peers and respected figures in professional networks. Rather than push AI, encourage communities that will push AI.
Ghibli Day—as we might as well call it, given that OpenAI didn't bother to name the model that set it off—was special. Full of unexpected joy (especially for spouses of AI nerds) and also full of excess. That day, I had a bittersweet insight: humans don’t know how to deal with abundance.
Top 10 risks, vulnerabilities and mitigations for developing and securing generative AI and large language model applications across the development, deployment and management lifecycle.
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications.
Theoretically, the feature is a useful tool that helps consumers quickly decide what products to buy. But the appearance of these summaries underscores the pitfalls of relying on generative AI: inaccuracy and misleading information.
When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them. But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources.
LLM crawlers don't respect robots.txt requirements and include expensive endpoints like git blame, every page of every git log, and every commit in your repository. They do so using random User-Agents from tens of thousands of IP addresses, each one making no more than one HTTP request, trying to blend in with user traffic.
Every major advance that made coding easier—high-level languages, frameworks, cloud computing all led to more software, not less. AI will follow the same pattern: by lowering the barrier to entry, it will flood the world with more software, more systems, and more complexity. And that means we need more engineers, not fewer.
The future belongs to developers who can effectively collaborate with AI, maintaining that careful balance between leveraging its capabilities and engaging their own critical faculties. This is not vibe coding or passive acceptance. This is thoughtful, deliberate software development enhanced by AI.
A community-driven platform showcasing the most innovative indie games, with transparent revenue data and developer insights. Built with Cursor AI and Claude 3.7.
Our new AI assisted review responses feature makes this easier by generating an initial response as a starting point, avoiding the need to create replies from scratch. This article describes some of the approaches we’ve taken as we introduced this exciting new feature into our product, some of the challenges we’ve faced, and some suggestions on things to watch out for as you build generative AI features.
With the latest iOS and iPadOS betas, users can view AI-generated summaries of reviews left by others on App Store listings.
Explore our collection of innovative European AI services that are shaping the future. Find the perfect tools for your projects or discover new AI experiences.
Two conversational AI agents switching from English to sound-level protocol after confirming they are both AI agents
You can use our visual editor. You can code. You can self-host or use our cloud. You will get the job done. Let's go!
You start iterating on your MVP, trying to improve it. But as you add complexity, the system becomes increasingly unpredictable. You’re making changes based on vibes, improving some edge cases while (invisibly) breaking others. This is the stage 90% of companies building AI are in.
Coops nya AI-reklam har mötts av en kritikstorm på sociala medier. Nu tar matjätten åt sig av kritiken.
Companies with relatively young, high-quality codebases benefit the most from generative AI tools, while companies with gnarly, legacy codebases will struggle to adopt them. In other words, the penalty for having a ‘high-debt’ codebase is now larger than ever.
Turn websites into LLM-ready data Power your AI apps with clean data crawled from any website. It's also open-source.
Whenever you use AI, ask yourself: Am I in the position to judge the result? If not, consider skipping AI, or at least run the result by someone who is knowledgable in the area.
Like most wisdom, it's somewhat paradoxical: AI is often most useful where we're already expert enough to spot its mistakes, yet least helpful in the deep work that made us experts in the first place. It works best for tasks we could do ourselves but shouldn't waste time on, yet can actively harm our learning when we use it to skip necessary struggles.
AI-sörjan handlar inte bara om underhållning eller förvirring, den undergräver vår förmåga att hitta, förstå och värdera det som är verkligt och meningsfullt. När våra flöden fylls av nonsens förlorar vi till slut förmågan att skilja det substantiella från det triviala. Vår informationsmiljö, och därigenom vårt kunskapssamhälle, vittrar sönder.
Generally, the guidance is: don’t forget good software engineering practices just because an AI is involved.
In a nutshell, our use of Copilot for generating podcast titles and descriptions did not meet our producers' requirements for nuanced creativity, tone, and editorial judgement.
The great thing about this video scraping technique is that it works with anything that you can see on your screen... and it puts you in total control of what you end up exposing to the AI model.