A weekly newsletter on technology applications in investment management, with an AI/LLM and automation angle. We combine 100% human curation and selection with LLM-based standardisation and summarisation, plus more deterministic search, collection, classification, and workflow, powered by Kubro(TM). Curated news, announcements, and posts, primarily taken directly from sources (arXiv papers, major AI/tech/data companies, investment firms). It's an evolving project.
Disclaimers: content is not fully human-verified; the summaries below are AI-generated. AI/LLMs may hallucinate and produce inaccurate summaries. These are selected items only, not intended as a comprehensive view, and are provided for information purposes only. Please DM with feedback and requests.
Anthropic has launched the Claude Partner Network to support organizations helping enterprises adopt its AI model, Claude. The company is committing an initial $100 million in 2026 for training courses, technical support, and joint market development. Claude is available on AWS, Google Cloud, and Microsoft Azure. Partners joining now receive immediate access to a new technical certification (Claude Certified Architect, Foundations) and are eligible for investment. Anthropic will scale its partner-facing team fivefold and provide resources such as the Partner Portal and Services Partner Directory. Membership is free of charge and applications are open today.
Source: Summary based on anthropic.com View Source | Found on Mar 12, 2026
NVIDIA launched Nemotron 3 Super on March 11, 2026, a 120-billion-parameter open model with 12 billion active parameters and a 1-million-token context window. The model uses a hybrid mixture-of-experts architecture, including latent MoE techniques, delivering up to 5x higher throughput and up to 2x higher accuracy than its predecessor. Nemotron 3 Super powers autonomous agents for companies such as Perplexity, CodeRabbit, Factory, Greptile, Edison Scientific, Lila Sciences, Amdocs, Palantir, Cadence, Dassault Systèmes and Siemens. It is available at build.nvidia.com and other platforms under a permissive license with open weights.
Source: Summary based on blogs.nvidia.com View Source | Found on Mar 11, 2026
The article critiques computational functionalism, which claims subjective experience arises solely from abstract causal topology, regardless of physical substrate. The authors identify this as the Abstraction Fallacy, arguing that symbolic computation is not an intrinsic physical process but a mapmaker-dependent description requiring an experiencing cognitive agent. They propose a framework distinguishing simulation (behavioral mimicry via vehicle causality) from instantiation (intrinsic constitution via content causality), demonstrating that algorithmic symbol manipulation cannot instantiate experience. The argument does not assert biological exclusivity and concludes that AI consciousness depends on specific physical constitution, offering a physically grounded refutation of computational functionalism.
Source: Summary based on deepmind.google View Source | Found on Mar 10, 2026
Meta, in collaboration with the Royal Thai Police Anti-Cyber Scam Center (ACSC), FBI, DOJ Scam Center Strike Force, and other international law enforcement agencies, led the second Joint Disruption Week in Bangkok. This operation resulted in disabling over 150,000 scam-related accounts and 21 arrests by the Royal Thai Police. The December pilot program removed 59,000 accounts, Pages, and Groups from Meta’s platforms and issued six arrest warrants. Meta also announced new tools: Facebook alerts for suspicious friend requests, WhatsApp device linking warnings, and advanced scam detection on Messenger to enhance user protection against online scams.
Source: Summary based on about.fb.com View Source | Found on Mar 11, 2026
Meta AI is expanding its real-time content offerings, including global breaking news, entertainment, and lifestyle stories. Users will now receive information and links from a broader range of sources tailored to their interests. Meta announced new partnerships with News Corp, Le Figaro, Prisa, and Süddeutsche Zeitung to facilitate access to diverse articles and help partners reach new audiences. The company aims to make Meta AI more responsive, accurate, and balanced by integrating varied news sources. Meta continues to experiment with features as its products evolve and plans further updates for enhanced user experiences.
Source: Summary based on about.fb.com View Source | Found on Mar 13, 2026
In 2023, Meta developed the Meta Training and Inference Accelerator (MTIA), a custom silicon chip family powering AI workloads efficiently. The company is advancing its MTIA roadmap by deploying four new chip generations (MTIA 300, 400, 450, and 500) within two years, with MTIA 300 already in production for ranking and recommendations training. Hundreds of thousands of MTIA chips are used for inference across organic content and ads. MTIA chips are optimized for GenAI inference and built on industry-standard ecosystems such as PyTorch, vLLM, Triton, and OCP, enabling rapid development cycles of six months or less.
Source: Summary based on about.fb.com View Source | Found on Mar 11, 2026
Gemini in Docs, Sheets, Slides, and Drive introduces new beta features for Google AI Ultra and Pro subscribers starting March 10, 2026. Users can now have Gemini pull relevant information from their files, emails, and the web to create personalized documents securely. Gemini assists with instant first drafts based on user prompts, refines sections or entire documents for professionalism while maintaining tone, and matches writing style or document format to reference materials. For example, it can populate a travel itinerary template with personal travel details extracted from emails such as flight information, hotel details, and rental car reservations.
Source: Summary based on blog.google View Source | Found on Mar 10, 2026
On March 11, 2026, Google LLC announced the completion of its acquisition of Wiz, a leading cloud and AI security platform based in New York. Wiz will join Google Cloud while maintaining its brand and commitment to securing customers across all major cloud environments, including Amazon Web Services, Google Cloud Platform, Microsoft Azure, and Oracle Cloud. The combined platform aims to enhance multicloud security for enterprises and government agencies by providing unified tools for detecting and responding to threats. Wiz is trusted by 50% of the Fortune 100 and notable organizations such as Shell, BMW, LVMH, Morgan Stanley, Mars, Salesforce, Takeda, Colgate-Palmolive, and Aon.
Source: Summary based on blog.google View Source | Found on Mar 11, 2026
Microsoft announced the first agentic end-to-end modernization solution integrating IT and developers into a unified workflow, featuring new agents for modernization built to scale and designed for control. According to Forrester’s Q1 2026 Cloud and AI Application Modernization Survey, 91% of IT leaders view application modernization as essential for AI advancement. The Azure Copilot migration agent, now in public preview, embeds AI across discovery, assessment, planning, and deployment to reduce timelines dramatically. GitHub Copilot’s modernization agent also entered public preview, enabling large-scale transformation of legacy applications. Ahold Delhaize used these tools to accelerate delivery and reduce complexity during their Azure migration.
Source: Summary based on azure.microsoft.com View Source | Found on Mar 12, 2026
The Anthropic Institute has been launched to address major societal challenges posed by advanced AI, drawing on research from across Anthropic. Led by co-founder Jack Clark as Head of Public Benefit, the Institute unites machine learning engineers, economists, and social scientists from teams such as the Frontier Red Team, Societal Impacts, and Economic Research. Founding hires include Matt Botvinick for AI and law, Anton Korinek for economic transformation studies, and Zoë Hitzig linking economics to model development. The Institute will also engage with affected workers and communities while expanding its analytical staff and opening a DC office this spring.
Source: Summary based on anthropic.com View Source | Found on Mar 11, 2026
AgentRx, an open-source framework announced on March 12, 2026, addresses the challenge of transparency in autonomous AI agents by pinpointing the “critical failure step” in agent trajectories. The release includes AgentRx and the AgentRx Benchmark, a dataset of 115 manually annotated failed trajectories across three complex domains. AgentRx uses a structured pipeline—trajectory normalization, constraint synthesis from tool schemas and domain policies, guarded evaluation with auditable validation logs, and LLM-based judging using a nine-category failure taxonomy—to identify unrecoverable errors. The framework demonstrated significant improvements over existing LLM-based baselines and is available for community use and contribution.
Source: Summary based on microsoft.com View Source | Found on Mar 13, 2026
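The "guarded evaluation" step of a pipeline like the one AgentRx describes can be sketched as a scan of the trajectory against synthesized constraints, returning the first unrecoverable violation. This is a minimal illustration; the constraints below are hand-written stand-ins for what AgentRx synthesizes from tool schemas and domain policies, and all names are hypothetical, not from the release.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    index: int
    tool: str
    args: dict
    observation: str

@dataclass
class Constraint:
    rule: str
    check: Callable[[Step], bool]  # True = step satisfies the rule

def find_critical_failure(trajectory: list[Step],
                          constraints: list[Constraint]) -> Optional[tuple[int, str]]:
    """Return (step index, violated rule) for the first constraint
    violation in the trajectory, or None if the trajectory is clean."""
    for step in trajectory:
        for c in constraints:
            if not c.check(step):
                return (step.index, c.rule)
    return None

# Toy trajectory: the agent books a flight before the date is confirmed.
traj = [
    Step(0, "search_flights", {"date": "2026-03-20"}, "3 results"),
    Step(1, "book_flight", {"confirmed": False}, "booked"),
]
constraints = [
    Constraint("book_flight requires confirmed=True",
               lambda s: s.tool != "book_flight" or s.args.get("confirmed") is True),
]
print(find_critical_failure(traj, constraints))  # -> (1, 'book_flight requires confirmed=True')
```

The real framework additionally normalizes trajectories, logs auditable validation traces, and hands ambiguous cases to an LLM judge with a nine-category failure taxonomy; this sketch covers only the deterministic guard.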
NVIDIA and Thinking Machines Lab announced a multiyear strategic partnership on March 10, 2026, to deploy at least one gigawatt of next-generation NVIDIA Vera Rubin systems for frontier model training and customizable AI platforms, with deployment targeted for early next year. The collaboration includes designing training and serving systems for NVIDIA architectures and expanding access to frontier AI and open models for enterprises, research institutions, and the scientific community. Additionally, NVIDIA has made a significant investment in Thinking Machines Lab to support its long-term growth.
Source: Summary based on blogs.nvidia.com View Source | Found on Mar 10, 2026
NVIDIA’s 2026 “State of AI” surveys gathered over 3,200 responses from financial services, retail/CPG, healthcare/life sciences, telecommunications, and manufacturing; 64% of organizations reported actively using AI. North America leads with 70% adoption. Large companies (1,000+ employees) show the highest usage at 76%. Key goals include operational efficiency (34%), employee productivity (33%), and new revenue streams (23%). AI increased annual revenue for 88% of respondents and reduced costs for 87%. Open source is important to 85%, and budgets are rising in 86% of organizations. Top challenges are data issues (48%) and a lack of AI experts (38%).
Source: Summary based on blogs.nvidia.com View Source | Found on Mar 09, 2026
LSEG has launched a new suite of ESG scores and sustainability analytics to enhance transparency, comparability, and analytical value for global financial markets. The scores use a research-driven methodology aligned with frameworks such as ISSB, GRI, SASB, and ESRS, relying on 220 standardized indicators and a double materiality matrix at the business segment level. Scores range from 0 to 5 and measure company management of ESG risks across 12 themes aggregated into three pillars. The offering includes overlays for controversies and positive impact signals, is available on multiple LSEG platforms, and covers over 16,000 companies representing more than 90% of global market capitalization.
Source: Summary based on lseg.com View Source | Found on Mar 09, 2026
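As an illustration of the roll-up LSEG describes (0-5 theme scores aggregated into three pillars), here is a minimal sketch. The theme names, pillar mapping, and unweighted averaging are placeholders: LSEG's actual methodology defines its own 12 themes and applies materiality weights at the business-segment level.

```python
from statistics import mean

# Illustrative theme -> pillar mapping (names are placeholders, not
# LSEG's actual theme definitions).
PILLARS = {
    "Environmental": ["emissions", "resource_use", "innovation", "env_management"],
    "Social": ["workforce", "human_rights", "community", "product_responsibility"],
    "Governance": ["management", "shareholders", "csr_strategy", "transparency"],
}

def pillar_scores(theme_scores: dict) -> dict:
    """Aggregate 0-5 theme scores into pillar scores by simple averaging
    (a stand-in for LSEG's materiality-weighted aggregation)."""
    return {pillar: round(mean(theme_scores[t] for t in themes), 2)
            for pillar, themes in PILLARS.items()}

company = {
    "emissions": 3.5, "resource_use": 4.0, "innovation": 2.5, "env_management": 3.0,
    "workforce": 4.5, "human_rights": 3.0, "community": 3.5,
    "product_responsibility": 4.0,
    "management": 2.0, "shareholders": 3.0, "csr_strategy": 2.5, "transparency": 3.5,
}
print(pillar_scores(company))
```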
U.S. nonfarm productivity grew by 2.8% in 2025, a significant increase compared to the pre-pandemic decade average of 1.2%. This improvement is attributed to factors such as pandemic-driven restructuring, capital deepening through investments in software and R&D, and labor scarcity leading businesses to automate and demand more output from workers. While artificial intelligence is influencing productivity at the task and firm level, survey evidence indicates most firms are still experimenting rather than implementing AI at scale, and official statistics do not yet capture many AI-related benefits due to measurement limitations.
Source: Summary based on am.jpmorgan.com View Source | Found on Mar 14, 2026
Recent volatility in global technology—especially software and SaaS—reflects investor fears that agentic AI could erode traditional software economics, pricing power, and customer stickiness. Generative coding tools may lower development costs, reduce switching costs, compress seat-license demand, and shift value toward firms controlling unique data, platforms, or distribution. Software equities have sold off sharply, while weakness has spread to leveraged loans and high-yield bonds. Managers argue disruption risk is real but likely gradual given embedded workflows, contracts, compliance, and training needs. Across public and private markets, investors are favoring AI infrastructure, demanding stronger fundamentals, and becoming far more selective within software.
Source: Summary based on troweprice.com View Source | Found on Mar 14, 2026
AI is forcing investors to differentiate within software rather than treat the sector uniformly. The piece argues that single-function applications are more exposed to commoditization, while mission-critical platforms with deep workflow integration, high retention, and proprietary data may remain resilient. Private-market valuations may eventually reflect public-market weakness, but fundamentals and active ownership can cushion the impact. In credit, financing is becoming more selective, with weaker borrowers facing lower leverage, tighter terms, and higher capital costs, especially among 2021–2022 vintages nearing maturity. The recommended underwriting framework evaluates both AI-driven disruption risk and upside potential to identify durable software businesses for investors.
Source: Summary based on nb.com View Source | Found on Mar 12, 2026
A creative consultancy conducted interviews with consumers across 16 countries to gauge attitudes toward AI. The findings show that nearly 75% believe technology can improve the world, but a similar proportion think society is adopting AI too quickly without considering consequences. While 58% trust AI to act in humanity’s best interests, an even larger share feel AI makes it difficult to discern truth. Additionally, 81% say AI has enabled them to create content they otherwise could not have made, and 37% can envision themselves falling in love with an AI companion.
Source: Summary based on heptagon-capital.com View Source | Found on Mar 10, 2026
The article investigates whether multi-dimensional sentiment signals extracted by large language models (LLMs) can enhance the prediction of weekly WTI crude oil futures returns. Using energy-sector news articles from 2020 to 2025, the authors construct five sentiment dimensions—relevance, polarity, intensity, uncertainty, and forwardness—using GPT-4o, Llama 3.2-3b, FinBERT, and AlphaVantage. Aggregated weekly signals are evaluated in a classification framework, with the combination of GPT-4o and FinBERT yielding the best predictive results. SHAP analysis identifies intensity and uncertainty as key predictors, demonstrating that multi-dimensional LLM-based sentiment measures improve commodity return forecasting.
Source: Summary based on arxiv.org View Source | Found on Mar 13, 2026
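The weekly aggregation step the paper describes (per-article sentiment dimensions pooled into weekly signals) might look like the sketch below. The field names, the relevance-weighted mean, and the naive polarity-threshold signal are all assumptions for illustration; the paper's actual aggregation and classification setup may differ.

```python
from collections import defaultdict

# Each article carries five LLM-scored dimensions; polarity in [-1, 1],
# the rest in [0, 1]. Values and field names are invented.
DIMS = ["relevance", "polarity", "intensity", "uncertainty", "forwardness"]

articles = [
    {"week": "2025-W01", "relevance": 0.9, "polarity": 0.4,
     "intensity": 0.7, "uncertainty": 0.2, "forwardness": 0.6},
    {"week": "2025-W01", "relevance": 0.6, "polarity": -0.1,
     "intensity": 0.3, "uncertainty": 0.5, "forwardness": 0.4},
    {"week": "2025-W02", "relevance": 0.8, "polarity": -0.6,
     "intensity": 0.8, "uncertainty": 0.7, "forwardness": 0.5},
]

def weekly_features(arts):
    """Average each sentiment dimension over the articles in a week,
    weighting by relevance so off-topic pieces contribute less."""
    by_week = defaultdict(list)
    for a in arts:
        by_week[a["week"]].append(a)
    feats = {}
    for wk, group in by_week.items():
        wsum = sum(a["relevance"] for a in group) or 1.0
        feats[wk] = {d: sum(a["relevance"] * a[d] for a in group) / wsum
                     for d in DIMS if d != "relevance"}
    return feats

feats = weekly_features(articles)
# A naive directional signal: positive weighted polarity -> predict "up".
signal = {wk: ("up" if f["polarity"] > 0 else "down") for wk, f in feats.items()}
print(signal)
```

In the paper these weekly features feed a proper classification model evaluated out of sample, with SHAP used to attribute predictive power to intensity and uncertainty; the threshold rule above only shows the data shape.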
The article, submitted on 11 March 2026 by Fabrizio Dimino, Bhaskarjit Sarmah, and Stefano Pasquali, introduces a risk-aware evaluation framework for large language model (LLM) security failures in Banking, Financial Services, and Insurance (BFSI). The framework features a domain-specific taxonomy of financial harms, an automated multi-round red-teaming pipeline, and an ensemble-based judging protocol. It presents the Risk-Adjusted Harm Score (RAHS), a metric that quantifies operational severity of disclosures by considering mitigation signals and inter-judge agreement. Findings indicate that higher decoding stochasticity and sustained adaptive interaction increase jailbreak success and escalate severe financial disclosures.
Source: Summary based on arxiv.org View Source | Found on Mar 12, 2026
OpenClaw-RL is a framework introduced by Yinjie Wang, Xuyang Chen, Xiaolong Jin, Mengdi Wang, and Ling Yang on 10 March 2026 that enables agents to learn from all next-state signals generated during interactions such as personal conversations, terminal executions, GUI interactions, SWE tasks, and tool-call traces. The system extracts evaluative signals as scalar rewards using a PRM judge and directive signals through Hindsight-Guided On-Policy Distillation (OPD), providing token-level directional advantage supervision. Its asynchronous design allows live serving of requests while updating policy without coordination overhead. The code is available at the provided URL.
Source: Summary based on arxiv.org View Source | Found on Mar 12, 2026
The article by Xupeng Chen, submitted on 10 March 2026, formalizes a macro-financial stress test for rapid AI adoption, identifying a distribution-and-contract mismatch where AI-generated abundance coexists with deficient demand due to economic institutions anchored to human cognitive scarcity. It details three mechanisms: a displacement spiral where firms substituting AI for labor reduce aggregate labor income and demand; Ghost GDP, in which AI output lowers monetary velocity and creates a wedge between measured output and consumption-relevant income; and intermediation collapse as AI compresses intermediary margins. Top-quintile earners drive 47–65% of U.S. consumption and face the highest AI exposure, impacting $2.5 trillion in global private credit and $13 trillion in mortgage markets. Eleven testable predictions are derived using FRED time series and BLS occupation-level data to quantify conditions leading from stable adjustment to crisis.
Source: Summary based on arxiv.org View Source | Found on Mar 11, 2026
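The displacement-spiral mechanism can be caricatured as a simple feedback loop: firms substitute AI for a share of labor each period, lowering aggregate labor income and hence demand. The sketch below uses invented parameters purely to show the loop's shape; it is not the paper's model.

```python
def displacement_spiral(periods=5, labor_income=100.0,
                        substitution=0.10, consumption_rate=0.9):
    """Iterate the stylized loop and return (labor income, demand)
    per period, rounded to 2 decimals. All parameters are illustrative."""
    path = []
    for _ in range(periods):
        labor_income *= (1 - substitution)        # AI displaces some wages
        demand = labor_income * consumption_rate  # demand follows income
        path.append((round(labor_income, 2), round(demand, 2)))
    return path

for income, demand in displacement_spiral():
    print(income, demand)
```

With these toy numbers, income and demand each shrink by 10% per period; the paper's point is that absent offsetting transfers or repricing, measured output and consumption-relevant income diverge along such a path.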
The article "Beyond the Illusion of Consensus: From Surface Heuristics to Knowledge-Grounded Evaluation in LLM-as-a-Judge" by Mingyang Song, Mao Zheng, and Chenning Xu challenges the assumption that high inter-evaluator agreement among LLM judges indicates reliable evaluation. In a study of 105,600 evaluation instances involving 32 LLMs, 3 frontier judges, 100 tasks, and 11 temperatures, the authors identify "Evaluation Illusion," where consensus is based on surface heuristics rather than substantive quality. They introduce MERG, a knowledge-driven rubric generation framework that increases agreement in codified domains (Education +22%, Academic +27%) but decreases it in subjective domains.
Source: Summary based on arxiv.org View Source | Found on Mar 12, 2026
The article introduces two reinforcement learning frameworks for option hedging: Replication Learning of Option Pricing (RLOP) and an adaptive Q-learner in Black-Scholes (QLBS), both designed to prioritize shortfall probability and downside-sensitive hedging. Using listed SPY and XOP options, the models are evaluated based on realized path delta hedging outcome distributions, shortfall probability, and tail risk measures such as Expected Shortfall. Empirical results show that RLOP reduces shortfall frequency in most cases and demonstrates significant tail-risk improvements during stress periods, while implied volatility fit often favors parametric models but does not predict after-cost hedging performance effectively.
Source: Summary based on arxiv.org View Source | Found on Mar 10, 2026
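The two evaluation metrics the paper emphasizes, shortfall probability and Expected Shortfall, are straightforward to compute from a sample of hedging P&L outcomes. A minimal sketch (the P&L sample is invented, and the tail-size convention is one common choice among several):

```python
from statistics import mean

def shortfall_prob(pnl, threshold=0.0):
    """Fraction of hedging P&L outcomes strictly below the threshold."""
    return sum(p < threshold for p in pnl) / len(pnl)

def expected_shortfall(pnl, alpha=0.95):
    """Mean loss over the worst (1 - alpha) share of outcomes (at least
    one observation), reported as a positive number."""
    losses = sorted(-p for p in pnl)  # ascending, so worst losses last
    k = max(1, int(round((1 - alpha) * len(losses))))
    return mean(losses[-k:])

pnl = [0.5, 0.2, -0.1, -0.4, 0.1, -1.2, 0.3, 0.0, -0.2, 0.4]
print(shortfall_prob(pnl))      # share of loss-making paths
print(expected_shortfall(pnl))  # mean loss in the worst tail
```

The paper's finding that implied-volatility fit does not predict after-cost hedging performance is precisely why it scores strategies on these distributional measures rather than on pricing error alone.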