Week 2025-25

The AI buzz has been with us for almost three years now, and the economic results have yet to materialize. Unless, that is, we count Nvidia selling record amounts of hardware and new power plants being built to cover its consumption. @vlkodotnet

Curiosity of the week: things are no longer easy for AI

It wasn't an information-rich week, at least not for us technically minded folks. But we did learn that company bosses are complaining that up to half of their employees reject AI. Companies want to adopt AI because it's trendy, yet the same companies fail to adapt their management to it. The missing training then means employees lack the necessary skills, and the result is precisely that distrust of AI. So if you want AI in your company, announcing at a staff meeting that "from today we use AI" simply won't cut it.

Nearly half of CEOs say employees are resistant or even hostile to AI
AI adoption faces three barriers: organizational change management, a lack of employee trust and workforce skills gaps, a Kyndryl report shows.

In an interview, Microsoft CEO Satya Nadella said that if AI is a revolution, we should be seeing step-change results in productivity and economic growth. It's an interesting take, because with the industrial and computer revolutions we saw exactly that. Back then, though, physical products were being made, and those revolutions helped make them. The AI revolution, in my view, is more a revolution in the non-physical world: we have more virtual content. Employers may be hoping we'll produce more, but so far we mostly have more ways to kill boredom.

Microsoft CEO Admits That AI Is Generating Basically No Value
Microsoft CEO Satya Nadella, whose company has invested billions of dollars in ChatGPT maker OpenAI, has had it with the constant hype surrounding AI. During an appearance on podcaster Dwarkesh Patel's show this week, Nadella offered a reality check, arguing that OpenAI's long-established goal of establishing "artificial general intelligence," (AGI) an ill-defined term that roughly denotes the point at which an AI can best humans on an intellectual level, is nonsense. "Us self-claiming some AGI

BIZ corner

McKinsey has come up with an "attention equation". What is it? An equation that weighs the quality of attention against its quantity. You can ship a lot of content, but it just flows through the user like half a litre of laxative mineral water: it takes more than it gives. OK, that's a rather silly example, let me try again. Real value comes from content that actually captures the user's attention. You may have a great TikTok/Reels video, but in the flood of content you got only 30 seconds of the user's attention. One swipe and the next video is already playing. But if you're somewhere live and in person, say at a concert or a sports event, you have the user's full attention.

The ‘attention equation’: Winning the right battles for consumer attention
McKinsey’s new “attention equation” sheds new light on consumer attention spans and explores how today’s brands can reach consumers more effectively.
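The text above doesn't reproduce McKinsey's actual formula, so here is only an illustrative sketch of the quality-versus-quantity trade-off it describes: value each touchpoint by seconds of attention weighted by how engaged that channel is. All channel names, durations and weights below are invented for the example.

```python
# Illustrative model (NOT McKinsey's published equation): total attention
# value as seconds-of-attention multiplied by a per-channel quality weight.

def attention_value(touchpoints):
    """Sum (seconds of attention x quality weight) over all touchpoints."""
    return sum(seconds * quality for _, seconds, quality in touchpoints)

campaign = [
    # (channel, seconds of attention, quality weight in 0..1) - made-up numbers
    ("short-form video", 30, 0.25),    # one swipe in a crowded feed
    ("live concert", 5400, 0.75),      # 90 minutes of near-undivided attention
]

for channel, seconds, quality in campaign:
    print(f"{channel}: {seconds * quality:.0f} weighted seconds")

print("total:", attention_value(campaign))
```

The toy numbers make the article's point: ninety minutes of high-quality, in-person attention dwarfs thousands of 30-second swipes.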

Deloitte has published its 2025 yearbook of consumer trends in the UK. For example, 47% of people use AI, but among Gen Z it's as much as 73%. The young are simply better at AI. Everyone uses AI at work, often without their employer's knowledge. What they may not realize is that they consume far more AI through generated content. The device market is saturated, and phones get replaced only when they break. 75% of people have streaming services. Interestingly, as many as 20% of people left social networks in the past year, 50% started turning off notifications, and a further 18% set usage limits on their apps. Granted, it's the UK and they're ahead of us, but maybe things don't look so bad for humanity after all.

Digital Consumer Trends 2025, UK Edition
Digital Consumer Trends is a multi-country study of how people engage with and purchase digital products. It spans devices, connectivity, media, and emerging technologies, and is now in its sixteenth year of publication. It was previously known as the Mobile Consumer Survey.

It may not quite belong in the BIZ section, but Nintendo will block your Switch 2 console if you use hardware that emulates their game cartridges (I honestly don't know what else to call it). They don't ban your account, only the console's access to their services, and if you then factory-reset the console, you'll never be able to sign in on it again. So your Switch becomes an expensive, non-functional piece of household decor.

Nintendo is banning online services on Switch 2 systems that use the Mig cartridge
Your console, but not your account, will be banned.

AI corner

I have some quite useful AI links today. First, a simple guide on improving GitHub Copilot with extra custom instructions: how to sharpen your prompts, and which AI model suits which type of task.

Essential custom instructions for GitHub Copilot
Prompt engineering - or should I say "Prompt Negotiation" - is (currently) an important part of productivity with AI. Your success working with tools like GitHub Copilot is going to be directly related to how well you prompt it. GitHub Copilot does try to handle as much of the prompt engineering for you behind the scenes as it can, but just like all your passive-aggressive relationships, you need to tell it what you want instead of expecting it to just know. It's not a mind reader. You need to give it Custom Instructions.

Custom Instructions are exactly that - custom prompts that get sent to the model with every request. You can define these at a project level or at an editor level. Often these are demonstrated as specific project-level instructions, such as a command like "prefer fetch over axios". These project-specific custom instructions are quite powerful and can be checked in and shared. You can create a custom instructions file for your project by adding a .github/copilot-instructions.md file, or by adding a file attribute to the instructions settings. You can have multiple of these file attributes, which means you can have multiple different instructions files. These very specific project instructions are super helpful for getting better help from your AI pair programmer. But it's less obvious what you can or should do with the more global, editor-level instructions.

I've been working heavily with GitHub Copilot for the past several months, and I've added four custom instructions that I've found greatly increase my productivity with it. These are more generic prompt-engineering "best practices" that will help you avoid pitfalls and get better code from the LLM. You can add these "global instructions" by going to your User Settings (JSON) file and adding keys like so:

    "github.copilot.chat.codeGeneration.instructions": [
      { "text": "this is an example of a custom instruction" }
    ]

Ok - let's do it.

Ask for missing context. "Avoid making assumptions. If you need additional context to accurately answer the user, ask the user for the missing information. Be specific about which context you need." The Achilles heel of LLMs is that they are designed to provide a response no matter what. It's the paperclip problem applied to LLMs: if you design a system to provide an answer, it is going to do that at all costs. This is why we get hallucinations. If I said to you, "make a GET request to the API", you would likely ask me several follow-up questions so that you could actually complete the task in a way that works. An LLM will just write a random GET request, because it does not actually care whether the code works or not. Copilot tries to mitigate a lot of this for you with its system prompt, but you can reduce hallucinations further by instructing the AI to ask you for clarification when it needs more context. This isn't bulletproof - LLMs seem so hell-bent on answering you at all costs that this instruction is often simply ignored. But on the occasions when it works, it's a nice surprise.

Provide file names. "Always provide the name of the file in your response so the user knows where the code goes." I've noticed that Copilot will sometimes give me back several blocks of code without mentioning where they belong. I then have to figure out which files it is referring to, which takes an extra cycle. This prompt forces the LLM to always provide the file name. If you are working in a theoretical space where you aren't talking about specific project files, Copilot will make up file names for the code snippets. That's fine, because it's a detail that doesn't matter in that context.

Write modular code. "Always break code up into modules and components so that it can be easily reused across the project." I tend to write a lot of frontend code, which is all about components these days. I've found that Copilot will often try to do too much in a single file when it should ideally break UI code out into separate components. AIs are fairly good at organization, so if you ask it to break things into components, Copilot will do an impressive job of suggesting the right places to decouple. I've found this prompt works quite well in non-UI code as well: if I ask for a change in an API, it helps Copilot break out services, repositories, etc.

Code quality incentives. "All code you write MUST be fully optimized. 'Fully optimized' includes maximizing algorithmic big-O efficiency for memory and runtime, following proper style conventions for the code and language (e.g. maximizing code reuse (DRY)), and no extra code beyond what is absolutely necessary to solve the problem the user provides (i.e. no technical debt). If the code is not fully optimized, you will be fined $100." This prompt comes almost verbatim from Max Woolf's "Can LLMs write better code?". In that post, Max describes trying to get LLMs to write better code by iterating on the same piece of code with the prompt "write better code". He finds that the above prompt combined with chain of thought produces very nice results, specifically when used with Claude. He uses the very last line to incentivize the LLM to improve its answers over iterations; in other words, if the LLM returns a bad answer, your next response should inform it that it has been fined. In theory, this makes the LLM write better code, because it now has an incentive to do so where it might otherwise keep returning bogus answers. Chain of thought is when you tell the model to "slow down and go one step at a time". You don't need to tell Copilot to do this, because it is already part of the system prompt.

The model matters more than the prompt. While these prompts will help you get better results from Copilot, in my experience the most effective thing you can do is pick the right model for the job. As of today, I see it like this. GPT-4o: specific tasks that don't require much "creativity"; use 4o when you know exactly what code you need and it's just faster if the LLM writes it. Claude: harder problems and solutions requiring creative thinking - when you aren't sure how something should be implemented, or it requires changes across multiple files; in my experience Claude is also far better than GPT-4o at helping with design tasks. o1: implementation plans, brainstorming and docs writing; my friend Martin Woodward finds that o1 is particularly good with tricky bugs and performance optimizations. Gemini: not widely available yet; I'm using it more and watching closely to see where it shines. I have high hopes.

Living instructions. I hope these instructions are helpful for you. I consider them "living" and hope to keep this list updated as I add or change things, as Copilot itself evolves, new models come along, and our general knowledge of prompting improves.
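Taken together, the four instructions from the article could be wired in as a single editor-level fragment in the VS Code User Settings (JSON) file. A sketch: the settings key is the one quoted in the article; the first three instruction texts are quoted from it, and the fourth is shortened here for brevity.

```json
{
  "github.copilot.chat.codeGeneration.instructions": [
    { "text": "Avoid making assumptions. If you need additional context to accurately answer the user, ask the user for the missing information. Be specific about which context you need." },
    { "text": "Always provide the name of the file in your response so the user knows where the code goes." },
    { "text": "Always break code up into modules and components so that it can be easily reused across the project." },
    { "text": "All code you write MUST be fully optimized. If the code is not fully optimized, you will be fined $100." }
  ]
}
```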

You don't have to use Claude Code only for generating code; it serves other purposes too. For example, generating a presentation about a project's structure, or creating, and then reading aloud to you, a summary of the past week's changes in a repository. The article may inspire you to build a tool of your own.

blog - kade@localhost:~$
blog - kade killary
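A minimal sketch of that last idea: collect the week's commits and hand them to Claude Code. This assumes the claude CLI is installed; the actual call (and the read-aloud step, macOS-only) is left commented out, so the script only builds and prints the prompt.

```shell
# Collect last week's commit messages (falls back if we're not in a git repo).
log=$(git log --since="1 week ago" --oneline 2>/dev/null || echo "no commits found")

# Build a prompt asking for plain-language release notes.
prompt="Summarize these repository changes from the past week as short release notes: $log"
echo "$prompt"

# claude -p "$prompt"          # non-interactive mode; uncomment to actually ask Claude Code
# claude -p "$prompt" | say    # macOS: have the summary read aloud
```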

Nanonets-OCR is a fairly small OCR model for converting the text content of images into Markdown.

nanonets/Nanonets-OCR-s · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.

Links to wrap up

Nxtscape is a browser you can automate with agents. And on top of that, it's open source.

Nxtscape
Nxtscape is a browser that is built for productivity and privacy.

Harper is an open-source alternative to Grammarly. It can even be compiled to WebAssembly and thus run directly in the browser as a native add-in.

GitHub - Automattic/harper: Offline, privacy-first grammar checker. Fast, open-source, Rust-powered
Offline, privacy-first grammar checker. Fast, open-source, Rust-powered - Automattic/harper

Fossify is a collection of open-source Android apps that don't send data to Google, protecting your privacy.

Fossify
A suite of open-source, ad-free apps with customizable colors - Fossify

Want a deep dive into how Google's TPU chips work, and the physical architecture built around them? It's such a specialized topic that even I didn't fully understand the whole article.

TPU Deep Dive

Imagine waking up in the morning. If you happen to belong to the small percentage of the population that remembers its dreams, the Dream Recorder was made just for you. You simply tell it what you experienced in your dream and, besides remembering it, it also generates an AI visualization of it in very low resolution.

DREAM RECORDER — DREAM OUT LOUD
Dream Recorder is the magical bedside device that catches your nightly visions and plays them back as vivid, cinematic reels.

A visual full stop to finish

Of course Lego never actually made this set, but if they had, it would have a place of honor on my desk.