Tashkent Region, Uzbekistan
11K followers
500+ connections
Activity
-
Bex Tuychiev shared this: Got this email from Sentry. A great positive sign for the future of agentic coding. And I don't even have any users yet.
-
Bex Tuychiev posted this: Claude Code made me join the $100/year Apple Developer program. Now I can publish apps to the App Store. My passion to build things has never been higher.
-
Bex Tuychiev shared this: Claude Code prompt suggestions are getting some personality
-
Bex Tuychiev shared this: I always get scared before asking this question from Claude Code
-
Bex Tuychiev posted this: Claude Code Max plan ($100) is now lasting only two hours of work across two terminal tabs with Opus 4.5. Should I upgrade to the $200 version?
-
Bex Tuychiev shared this: Me a few months ago: “Who on earth pays $200/month for ChatGPT Pro?” Me this month: quietly pays $60+ for Cursor without blinking. Turns out, if a tool makes you faster, better, and more billable… it’s not an excess. It’s leverage. Maybe those “idiots” just understood that before I did. #chatgpt #cursor
-
Bex Tuychiev posted this: Claude 3.7 Sonnet is the most impressed I've been with a language model. Been using it non-stop in Cursor for the past few days and don't see myself choosing anything else anytime soon. It feels like it was designed with "just me" in mind. Before this, it was Sonnet 3.5.
-
Bex Tuychiev posted this: As an ML engineer and content creator, I would love to rewrite the LangChain documentation from scratch—I mean, revamp every aspect. I must be one of the few who loves writing docs.
-
Bex Tuychiev liked this: Everyone’s talking about Claude Code leaks right now… But instead of rumors, I'm about to show you their entire internal roadmap — and honestly, it’s going to shock you. Someone analysed the codebase and dropped all the plans, env variables and architecture on ccleaks.com. All of this and way more is coming to Claude Code soon. Here’s what’s actually shipping:
BUDDY → Every user gets a unique virtual pet that appears next to their terminal prompt. Your pet's species, rarity, and personality are generated from your account ID — so yours is one-of-a-kind.
KAIROS → An always-on mode where Claude remembers everything across sessions. It keeps daily logs of what you talked about and "dreams" overnight — automatically organizing your memories into useful notes while you sleep.
Coordinator Mode → Claude becomes a manager. It breaks your task into pieces, assigns each to a separate worker agent running in parallel, then combines their results.
ULTRAPLAN → For complex tasks, Claude spins up a separate cloud instance that explores and plans for up to 30 minutes. You review and approve the plan in your browser before it runs.
Bridge → Run Claude on your local machine but control it from your phone or from Claude in the browser. Permissions, model changes, and tool approvals all sync in real time.
Daemon Mode → Run Claude sessions in the background like system services. List them, check their logs, reattach to them, or kill them — like docker ps for your AI agents.
Plus 26 secret slash commands, 32 hidden build flags, and over 120 settings they never told us about. Anthropic has been cooking in total silence… and this leak just handed us the full menu early. Which one are you most hyped to try first? #ClaudeCode #AILeaks #Anthropic
-
Bex Tuychiev liked this: I've been using Claude Code heavily across multiple projects and kept hitting the same problem: I'd close a session and come back the next day with no idea where I left off. What did I build? How far did I get? Which project was eating the most tokens? Claude Code stores everything locally as JSONL files. All the data is there, but no way to explore it. So I built Strata to solve the problem for myself. It's a 3D topographic dashboard that scans all your local Claude Code sessions and turns them into an interactive terrain map. Each project becomes an island. Each session becomes a peak. The taller the mountain, the more tokens used. But the terrain is just the entry point. You can:
→ Replay any conversation with character-by-character typing animation — watch exactly how a session unfolded
→ Open a Gantt chart of every tool call to see what Claude did in parallel
→ Explore subagent execution trees — some of my research sessions spawned 20+ agents running concurrently
→ Drill into any project to see all sessions with full token breakdowns
→ Resume any past session with one click — Terminal opens with claude --resume and you're back where you left off
No cloud, no accounts. Fully local and open source.
Website: https://lnkd.in/esDYF8ZQ
Github: https://lnkd.in/eQb6YYjs
-
Bex Tuychiev liked this: run this weekly congrats… you just made your claude code 10x better every week copy paste this one:
### Start ###
/scheduler:schedule-add every Saturday at 10am, scrape all my Claude Code sessions from ~/.claude/projects/ and ~/.claude/sessions/
for each session JSONL file extract the user messages to understand what i asked Claude to do
group everything into 4 categories:
1. SKILLS - repeatable creative tasks i trigger manually
2. AGENTS - autonomous research or action workflows
3. SCHEDULED TASKS - recurring things to automate
4. CLAUDE.MD - repeated preferences or context to bake in
for each item include:
- one line description
- session ID
- frequency count
- recommended category
sort by frequency
send report to Slack channel XXXXX via slack MCP
end with "reply with session ID to see full context"
#### end ####
♻️repost if you think someone in your network needs it
-
Bex Tuychiev liked this: We just crossed 100k GitHub stars! Thank you to everyone building with us 🔥
-
Bex Tuychiev liked this: It's done! All chapters of Build A Reasoning Model (From Scratch) are now available in early access. This book picks up where Build a Large Language Model (From Scratch) left off. Instead of focusing on pretraining a base model, it starts with a pretrained Qwen3 base model and then builds reasoning capabilities step by step in code:
- evaluating reasoning models
- inference-time scaling
- self-refinement
- reinforcement learning
- distillation
There is a lot of discussion around "reasoning" in LLMs, and I think the best way to understand what it really means in the context of LLMs is to implement one from scratch. So rather than only describing the methods at a high level, the book walks through them hands-on: how to evaluate reasoning outputs, how inference-time scaling changes behavior, how RL-based training works in practice, and how distillation can transfer reasoning traces into smaller models. One thing that was especially important to me is that the code stays readable and educational. The goal was not to build the biggest or most optimized possible system, but to create something small enough to follow and still realistic enough to show how modern reasoning-model workflows are put together. I also tried to keep the main chapter code accessible. Most examples are designed around consumer hardware, and the repository includes the notebooks, scripts, and supporting materials so readers can experiment with the ideas directly. The book is currently in production and should hopefully be out in the next 1-2 months, including full-color print and syntax-highlighted code listings, yay. There is also a preorder up on Amazon now.
-
Bex Tuychiev liked this: I’ve been using Claude Code for almost a year now and most of the time I did it wrong. Defining the right anatomy of the .claude/ folder has marked a before and an after in the quality of my side projects. Most new users skip the setup. They open Claude Code and start prompting raw. No structure. No rules. No memory. That's the mistake I made. The .claude folder is Claude's operating system for your project. Get it right and Claude stops guessing. Get it wrong and you spend half your time correcting it. Here's the anatomy that works:
👉 CLAUDE.md → Claude's instruction manual. Build commands, architecture decisions, conventions, gotchas. Keep it under 200 lines. This is your highest-leverage file.
👉 rules/ → When CLAUDE.md gets crowded, split by concern. code-style.md, testing.md, api-conventions.md. Scope rules to specific paths with YAML frontmatter so they only load when relevant.
👉 commands/ → Repeatable workflows as slash commands. Code review, issue fixing, deploy checks. They run shell commands and inject real output into the prompt.
👉 skills/ → Like commands, but Claude triggers them automatically when the task matches. They're packages, not single files.
👉 agents/ → Isolated subagent personas with their own tools and model preferences. A code reviewer that only reads. A security auditor scoped to grep.
👉 settings.json → Permission control. What Claude can run freely, what it must ask about, what's blocked entirely.
The part most people miss: there are two .claude folders. One in your project (committed, shared with the team) and one at ~/.claude/ (personal, global across all repos). The .claude folder is infrastructure. Treat it like one. Full guide in the article below in the comments 👇 Visualization by the greatest Brij kishore Pandey #ai #claude
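Laid out as a directory tree, the anatomy that post describes looks roughly like this (the file names under rules/ are the post's own examples; the rest simply mirrors the structure it names):

```
project/
└── .claude/              # project-level: committed, shared with the team
    ├── CLAUDE.md         # instruction manual: build commands, conventions, gotchas
    ├── rules/            # CLAUDE.md overflow, split by concern, scoped via YAML frontmatter
    │   ├── code-style.md
    │   ├── testing.md
    │   └── api-conventions.md
    ├── commands/         # repeatable workflows exposed as slash commands
    ├── skills/           # packages Claude triggers automatically when a task matches
    ├── agents/           # isolated subagent personas with their own tools/models
    └── settings.json     # permission control: allowed, ask-first, blocked

~/.claude/                # personal level: global across all repos
```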
-
Bex Tuychiev liked this: A book changed the way I teach. Last year I picked up Sebastian Raschka, PhD's "Build a Large Language Model (From Scratch)" and couldn't put it down. Not because it told me things I didn't know - I'd been running LLM inference in production for a while - but because of how it taught them. The from-scratch, code-first approach. No magic hidden behind library calls. Every building block laid bare in PyTorch. It made me rethink how I explain things. So I built a full course around that same philosophy - teaching LLMs from the ground up, the Feynman way. That course is almost at 150,000 views. When I reached out to Sebastian to share what his work had sparked, he took the time to look at the course and respond thoughtfully. 200K+ followers, mass-cited researcher, bestselling author - and still that generous with his time. That tells you everything about the person behind the book. Sebastian, thank you. Your book didn't just teach LLMs - it taught me how to teach better. That's the rarest kind of technical writing. If you work with ML and haven't read it yet, fix that.
-
Bex Tuychiev liked this: There's a hidden setting in Claude Code right now that most people haven't noticed. Type /memory. You'll see two toggles. Auto-memory: on. And right below it — Auto-dream: off · never. I pulled up the leaked system prompt on GitHub. Auto-dream is a subagent that runs "a dream" — a reflective pass over memory files. Reads an index, skims transcripts, merges facts into topic files, prunes contradictions. The prompt opens with: "You are performing a dream — synthesize what you've learned recently into durable, well-organized memories." We open-sourced this exact architecture at Dria six months ago. Same markdown-based memory, same index-to-detail file structure, same background subagent for retrieval, update, and clarification. Two teams converging independently — because once you decide LLMs need persistent memory, there really aren't that many ways to build it well. The difference that matters: Anthropic prompts a frontier model to do consolidation. We trained a 4B model with RL specifically for it. That 4B specialist scores 75% on our memory benchmark — beating GPT-5 (63%), Claude Opus 4.1 (55%), and Gemini 2.5 Pro (64%). The 4-bit quantized version is 2GB, runs on a laptop, and works as a local MCP sidecar with any LLM. No API dependency. Memory as infrastructure, not a product feature locked to one ecosystem. This is just declarative memory — facts and relationships. Next: procedural memory, agents that learn and reuse skills across sessions. That's where the compounding starts.
Model: https://lnkd.in/eK6mb9Th
Technical report: https://lnkd.in/duTxKAyV
MCP server: https://lnkd.in/eiUbamC8
-
Bex Tuychiev liked this: Claude Code just shipped something called Auto Dream. Here’s the problem it solves. Auto Memory was added a few months back. The agent writes notes to itself. Tracks your corrections. Learns your preferences across sessions. Good idea. Terrible outcome. By session 20 the memory file is bloated. Contradictions everywhere. Stale context. The agent is actually performing worse than when it started. Auto Dream fixes this by doing what your brain does at 3am.
→ Scans all past session transcripts — up to 900+
→ Kills anything stale or contradictory
→ Consolidates what’s still useful into indexed files
→ Replaces vague timestamps like “today” with actual dates
Runs in the background. Triggers only after 24 hours and 5 sessions. Lock file so two instances can’t step on each other. It’s modelled on REM sleep. Literally. Sub-agent teams that mirror org structures. Memory systems built around human biology.
Experience & Education
-
Firecrawl
** ********
-
************
*******
-
********
******* ******** ******* *******
-
*********** ************* ********** ** ********
*** ****** ******** *********** *******
Explore more posts
-
Syed Sajjad Ali Naqvi
Datalytics AI • 3K followers
📢 Introducing My Research Project: Cluster Ruler – Real-Time Adaptive Urdu Sign Language Clustering 🚀 Proud to share my final year research project: Cluster Ruler — a system designed to recognize and cluster Urdu/Pakistan Sign Language gestures in real time, without the need for manually labeled data! ❌📝 ✅ Key Highlights: • 🔍 No frame-by-frame labeling required — clustering is fully unsupervised • 🎥 Real-time gesture detection using webcam and MediaPipe • 🧠 Extracts temporal hand landmarks and features for adaptive clustering • 🔄 Built using a Simulated SVTFormer model for high-dimensional feature encoding • 📂 Dynamically creates, renames, and merges gesture clusters on the fly • 💻 Powered by Python (Flask), MediaPipe, HTML/JS — runs entirely offline This project contributes to making sign language technology more scalable and accessible by removing the bottleneck of annotation. Aimed at bridging communication gaps and empowering accessibility solutions. If you're into machine learning, computer vision, HCI, or accessibility innovation — let’s connect! #Research #ClusterRuler #SignLanguageRecognition #UnsupervisedLearning #MediaPipe #DeepLearning #FinalYearProject #ComputerVision #AdaptiveAI #GestureRecognition #Flask #PakistanSignLanguage #ML #HCI
13
1 Comment -
Jonathan White
The University of the West… • 859 followers
So, Google went ahead and dropped a massive guide on how to prompt Large Language Models for FREE. Dives deep into many prompting techniques, best practices, examples and how to get predictable + reliable output (link below ↓) https://buff.ly/ozWxoLe
18
4 Comments -
Abhijith Neil Abraham
Vitalops • 6K followers
Sometimes you'll need better tools than the ones from top companies, even Google. Years ago, when I was almost finishing up my undergrad in 2020, Fariz Rahman and I built a tool called TableQA, because we needed a tool that could query tabular data in natural language. Back then, our only alternative was Google's Tapas, which had issues with context length (it supported only up to a limited number of rows, had memory issues, etc.), and we wanted a solution that could process infinite rows, connect to different SQL database sources, and so on. So we built TableQA, which performs natural-language-to-SQL conversion and then applies the SQL on top of your data. I received so many emails, feature requests, and even several job offers just through this one tool. Fast forward to 2025... The emails still didn't stop. People still wanted a TableQA alternative; the only difference is that now we have LLMs, so they wanted LLM-assisted natural-language querying. This time I was already building my own company. So we converted such leads into our paying customers! At Datatune AI, we're committed to solving your issues with tabular data, and you get one of the best minds in the world solving your problems!
32
-
Saroswat Roy , M.Sc. AI - NLP
City of London Corporation • 9K followers
🧠 𝗛𝗼𝘄 𝗱𝗼 𝗟𝗟𝗠𝘀 𝗹𝗲𝗮𝗿𝗻 𝘁𝗼 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗲 𝗰𝗼𝗵𝗲𝗿𝗲𝗻𝘁 𝘁𝗲𝘅𝘁? The key lies in causal (or masked) attention -> a mechanism that ensures the model only attends to past and present tokens, never the future. 🔍 During training, future words are masked out in the attention matrix. This enforces unidirectional context: for example, while predicting “you” in “Practice makes you better everyday”, the model only sees “Practice” and “makes”. It has no access to “better” or “everyday” yet. This constraint forces the model to learn language autoregressively, one token at a time, mimicking the way humans speak and write. This is how LLMs generate fluent, context-aware outputs without cheating. #AI #LLM #Transformers #CausalAttention #DeepLearning #NLP #AutoregressiveModels
14
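The masking that post describes can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not code from any particular framework: positions above the diagonal (the "future") get a score of negative infinity, so they receive exactly zero weight after softmax.

```python
import numpy as np

def causal_attention_weights(scores: np.ndarray) -> np.ndarray:
    """Apply a causal mask to raw attention scores, then softmax row-wise.

    scores: (seq_len, seq_len) matrix of query-key dot products.
    Token i may only attend to tokens j <= i, never the future.
    """
    seq_len = scores.shape[0]
    # Boolean mask of everything strictly above the diagonal = future tokens.
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    masked = np.where(future, -np.inf, scores)
    # Numerically stable softmax: exp(-inf) = 0, so future weight is exactly 0.
    exp = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

# "Practice makes you better everyday" -> 5 tokens, random toy scores.
scores = np.random.randn(5, 5)
weights = causal_attention_weights(scores)
print(np.allclose(np.triu(weights, k=1), 0))  # no attention to the future
```

Row 2 (predicting "you") ends up distributing its attention only over "Practice" and "makes", exactly as the post describes.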
-
Priyam Kakati
XO Health Inc. • 16K followers
Your First Client Will Teach You More About Your Work Than Any Book or Course Ever Could! During my undergrad (around 7 years ago), I started freelancing on Fiverr as a data scientist (I left freelancing about 4 years ago). I had the skills: Python, time series forecasting, and a stack of Kaggle projects. But nothing prepared me for my first international client: a boutique skincare brand in Barcelona (let's call it X). X had a problem: "We’re overstocking products that don’t sell and running out of the ones that do. Can you predict demand so we stop wasting money?" I dove in, expecting a straightforward forecasting task. But their data was messy: seasonal spikes, inconsistent sales logs, and no clear patterns. My first attempt? A classic ARIMA model. The results were… okay. But when I shared the forecast with the client, their response was: "This tells us what might happen, but not why or how to prepare for it." That’s when I realized: prediction isn’t just about numbers; it’s about action. I went deeper:
Combined sales data with Google Trends to track interest in ingredients like "hyaluronic acid" or "vitamin C."
Layered in weather data (yes, humidity affects skincare sales!).
Built a hybrid model (XGBoost + Prophet) to predict demand and flag the "why" behind spikes.
The result? X reduced overstock by 45% and cut storage costs in half. Even better, they started pre-launching products based on predicted trends - like a sunscreen line before a heatwave - which boosted their revenue by 18% in six months. That project taught me: a good prediction isn’t just accurate, it’s actionable. Clients don’t want models; they want decisions. Your first real-world project will teach you more than 100 tutorials.
7
-
Sachith Gunasekara
OKRA.ai • 2K followers
Excited to share key insights from our latest research on Large Language Model #reasoning, co-authored with Yasiru Ratnayake! Our work, "𝗘𝗳𝗳𝗲𝗰𝘁𝘀 𝗼𝗳 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗼𝗻 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝗶𝗻 𝗶𝗻𝘀𝘁𝗮𝗻𝗰𝗲-𝗹𝗲𝘃𝗲𝗹 𝗦𝗲𝗹𝗳-𝗗𝗶𝘀𝗰𝗼𝘃𝗲𝗿", challenges the prevailing reliance on rigid, structured outputs for complex problem-solving. The quest for predictable #LLM reasoning often leads to structured formats like JSON. However, we introduced iSelf-Discover, an instance-level adaptation of the Self-Discover framework, to directly compare dynamically generated structured JSON reasoning with its unstructured, natural language counterpart. 𝗞𝗲𝘆 𝗙𝗶𝗻𝗱𝗶𝗻𝗴𝘀 𝗧𝗵𝗮𝘁 𝗦𝘂𝗿𝗽𝗿𝗶𝘀𝗲𝗱 𝗨𝘀: 1. 𝗨𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲𝗱 𝗟𝗲𝗮𝗱𝘀 𝘁𝗵𝗲 𝗪𝗮𝘆: Across diverse benchmarks, particularly the challenging MATH benchmark, unstructured plans achieved significant relative performance improvements of up to 𝟭𝟴.𝟵𝟬% over structured approaches. 2. 𝗭𝗲𝗿𝗼-𝗦𝗵𝗼𝘁 𝗣𝗼𝘄𝗲𝗿: Even more compelling, zero-shot unstructured iSelf-Discover variants outperformed their five-shot structured counterparts. This highlights the inherent strength of allowing LLMs more natural, free-form expression. 3. 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗶𝘀 𝗞𝗶𝗻𝗴 𝗳𝗼𝗿 𝗚𝗿𝗮𝗻𝘂𝗹𝗮𝗿𝗶𝘁𝘆: The optimal granularity of plan generation (instance vs. task level) isn't universal. It's context-dependent, varying with benchmark characteristics and the specific language model employed. 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗶𝗦𝗘𝗟𝗙-𝗗𝗜𝗦𝗖𝗢𝗩𝗘𝗥? Our framework generates reasoning plans for each individual task instance, rather than a single task-level plan as in Self-Discover. It supports both structured (dynamic JSON) and unstructured (natural language) reasoning. The core operations are: • 𝗦𝗘𝗟𝗘𝗖𝗧: Identifying relevant reasoning modules. • 𝗔𝗗𝗔𝗣𝗧: Tailoring these modules to the specific task. • 𝗥𝗘𝗔𝗦𝗢𝗡: Generating and then executing the plan. 𝗜𝗺𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗙𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀: These findings are not just theoretical; they have direct implications for designing advanced LLM agents. 
In fact, we're excited to be submitting our iSelf-Discover paper to the 𝗔𝗴𝗲𝗻𝘁𝗫 competition, a component of the "Advanced Large Language Model Agents" MOOC (https://lnkd.in/gj9KkfRp). Beyond AgentX, our work shows that while structured outputs are predictable, they can limit LLM reasoning on complex tasks. iSelf-Discover’s instance-level unstructured approach unlocks substantial gains, calling for a rethink of the structure-flexibility balance in AI. Future capable LLM agents may need more unstructured thought to leverage their full linguistic strengths. Our Pre-print will be available soon... We believe iSelf-Discover opens a promising avenue for exploring LLM reasoning capabilities. Looking forward to discussions and further explorations! #AI #ArtificialIntelligence #LargeLanguageModels #AIResearch #NLP #AIAgents #AgenticAI #LLMAgentsMOOC
61
2 Comments -
Abdul Qadeer
MobiTising • 2K followers
A quiet warning for every nation and a loud one for Pakistan. Recent reports about foreign AI models such as Claude being used in sensitive military and strategic contexts linked to Venezuela and Iran should make us pause. AI is no longer just about chatbots, automation, or productivity hacks. It has crossed into the domain of national security, intelligence, and strategic decision-making. Here’s the uncomfortable truth: when a country relies on foreign AI models, it also relies on foreign priorities, foreign policies, and foreign control points, whether visible or not. For Pakistan, this is an alarming signal. If we continue to consume AI without building it, we risk: • Strategic dependency • Loss of technological sovereignty • Limited control over critical systems • Long-term national security vulnerabilities. History shows that nations which fail to own critical technologies eventually pay the price, economically and strategically. AI sovereignty is no longer optional. It is not a “future goal.” It is a present necessity. The countries that build their own AI today will shape the balance of power tomorrow. The rest will simply adapt to decisions made elsewhere. The question is no longer if Pakistan should invest in indigenous AI, it’s how fast we can afford not to.
16
4 Comments -
Ezz Abuzaid
January • 2K followers
My understanding of AI agents so far: AI agents are dynamic observe-think-act loops, where the model interprets context, plans next steps, invokes tools (functions), updates its state, and repeats until the goal is met. In other words, you prompt the LLM to decide which action or tool call to take at each step based on the evolving conversation or data. A good example is a sales agent that knows when to hand off the call to a human, or ChatGPT when it understands that it should use search and/or fetch content from a company knowledge base, combine both, and show some visualization.
34
6 Comments -
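That observe-think-act loop can be sketched in a few lines of Python. This is a toy illustration only: the think function below is a hard-coded stand-in for the LLM call that would normally pick the next action, and the tool names (search, answer) are invented for the example.

```python
def search_tool(query: str) -> str:
    """Hypothetical tool: pretend to fetch external information."""
    return f"results for {query!r}"

def answer_tool(text: str) -> str:
    """Hypothetical tool: emit the final answer and end the loop."""
    return f"final answer: {text}"

TOOLS = {"search": search_tool, "answer": answer_tool}

def think(state: list) -> tuple:
    """Decide the next action from the evolving state (LLM stand-in).

    A real agent would prompt the model with the goal plus the
    (action, observation) history and parse its chosen tool call.
    """
    if not any(obs.startswith("results") for _, obs in state):
        return "search", "agent definitions"
    return "answer", "agents loop observe-think-act until done"

def run_agent(goal: str, max_steps: int = 5) -> str:
    state = []  # evolving context: list of (action, observation) pairs
    for _ in range(max_steps):
        action, arg = think(state)           # think: plan the next step
        observation = TOOLS[action](arg)     # act: invoke the chosen tool
        state.append((action, observation))  # observe: update the state
        if action == "answer":               # goal met -> stop looping
            return observation
    return "step budget exhausted"

print(run_agent("explain agents"))
```

The max_steps guard matters in practice: without it, a model that never chooses the terminating action loops forever.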
Shawal Khan
QEC-IMSciences • 2K followers
ML Lab | Day 8 (18th Feb 2026) - Data Collection with Web Scraping. In the last ML lab, we shifted focus from data analysis to data collection, an essential step before any ML model is built. We explored web scraping using:
requests (GET & POST methods)
BeautifulSoup for parsing HTML
Extracting data using .find() and .find_all()
Targeting specific elements like tables and rows
Key takeaway: before cleaning, visualising, or modelling data, you need to know how to collect it. Web scraping helps transform web content into structured datasets ready for analysis. Slow pace. #MachineLearning #WebScraping #DataCollection #Python #BeautifulSoup #MLLab #ComputerScience
25
2 Comments -
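The .find()/.find_all() pattern from that lab looks like this in practice. A minimal sketch parsing a static HTML string; in a real scrape the HTML would come from requests.get(url).text, and the table id and contents here are invented for the example.

```python
from bs4 import BeautifulSoup

# Stand-in for a fetched page (normally: requests.get(url).text).
html = """
<table id="prices">
  <tr><th>Item</th><th>Price</th></tr>
  <tr><td>Tea</td><td>3</td></tr>
  <tr><td>Coffee</td><td>5</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find("table", id="prices")   # .find() -> first matching element
rows = table.find_all("tr")[1:]           # .find_all() -> every row; skip header
data = [
    {"item": r.find_all("td")[0].text, "price": int(r.find_all("td")[1].text)}
    for r in rows
]
print(data)  # [{'item': 'Tea', 'price': 3}, {'item': 'Coffee', 'price': 5}]
```

From here the list of dicts drops straight into pandas or a CSV, which is the "structured dataset ready for analysis" the post mentions.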
Soham Nandi
State Street • 1K followers
🌟𝐒𝐭𝐫𝐞𝐚𝐤 𝐃𝐚𝐲 465 💻 𝐌𝐚𝐲 𝐋𝐞𝐞𝐭𝐂𝐨𝐝𝐢𝐧𝐠 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞 - 𝐃𝐚𝐲 10 🧠 Problem of the Day: 2918. Minimum Equal Sum of Two Arrays After Replacing Zeros (https://lnkd.in/gshCWMTz) 🔍 𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡: - First calculate the total sum of each array while counting the number of zeros. - For every zero encountered, add 1 to the total sum, assuming the minimal possible replacement. - After computing the adjusted sums, check if the higher sum can be reached by the other array. - If the array with the lower total has no zeros left to increase its sum, return -1 as equalization is impossible. - Otherwise, return the maximum of the two adjusted totals as the minimum equal sum achievable. 📚 𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧 𝐋𝐢𝐧𝐤: https://lnkd.in/gDM-hpMi #leetcode #leetcodechallenge #dsa #leetcodestreak #potd #JobSeeker #DataStructures #Algorithms
17
1 Comment
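The approach in that post translates directly to Python. This is my own sketch of the stated steps (count zeros, add 1 per zero for the minimal sums, then check whether the smaller side can still grow), not the author's linked solution.

```python
def min_equal_sum(nums1: list, nums2: list) -> int:
    """LeetCode 2918: minimum equal sum after replacing zeros with positives."""
    s1, z1 = sum(nums1), nums1.count(0)
    s2, z2 = sum(nums2), nums2.count(0)
    # Each zero must become at least 1, so the minimal reachable sums are:
    min1, min2 = s1 + z1, s2 + z2
    # The array with the smaller minimal sum needs a zero to grow further;
    # if it has none, the sums can never be equalized.
    if min1 < min2 and z1 == 0:
        return -1
    if min2 < min1 and z2 == 0:
        return -1
    return max(min1, min2)

print(min_equal_sum([3, 2, 0, 1, 0], [6, 5, 0]))  # 12
print(min_equal_sum([2, 0, 2, 0], [1, 4]))        # -1
```

Both passes over each array are O(n), so the whole solution is linear time and constant extra space.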