A weekly newsletter on technology applications in investment management, with an AI/LLM and automation angle. Curated news, announcements, and posts, drawn primarily from original sources (arXiv papers, major AI/tech/data companies, investment firms). We apply some of ROI's Kubro(TM) Engine tools on the backend for production, though with a human in the loop (for now). It's a start, and it will evolve on a weekly basis.
Disclaimers: content is not fully human-verified; the summaries below are AI-generated. AI/LLMs may hallucinate and produce inaccurate summaries. Select items only; not intended as a comprehensive view. For information purposes only. Please DM with feedback and requests.
Claude Code Security, now available in a limited research preview, scans codebases for security vulnerabilities and suggests targeted software patches for human review. Released to Enterprise and Team customers, with expedited access for open-source maintainers, it uses AI to detect complex vulnerabilities missed by rule-based tools. Findings undergo multi-stage verification and are assigned severity and confidence ratings before appearing in the dashboard for developer approval. The feature builds on over a year of research—including competitive events and collaboration with Pacific Northwest National Laboratory—during which Claude Opus 4.6 found over 500 vulnerabilities in production open-source codebases.
🔗 Source: Summary based on anthropic.com View Source | Found on Feb 20, 2026
Gemini 3.1 Pro was released in preview on February 19, 2026, following the launch of Gemini 3 Pro in November. The update includes higher limits for users with Google AI Pro and Ultra plans and is available in the Gemini app. Additionally, 3.1 Pro is now accessible on NotebookLM exclusively for Pro and Ultra users. Developers and enterprises can access Gemini 3.1 Pro in preview through the Gemini API via AI Studio, Antigravity, Vertex AI, Gemini Enterprise, Gemini CLI, and Android Studio.
🔗 Source: Summary based on blog.google View Source | Found on Feb 19, 2026
Claude Sonnet 4.6 is a full upgrade to the Sonnet model, featuring improved coding, computer use, long-context reasoning, agent planning, knowledge work, and design. It offers a 1M token context window in beta and is now the default model for Free and Pro plans on claude.ai and Claude Cowork, with pricing unchanged from Sonnet 4.5 at $3/$15 per million tokens. Safety evaluations show it is as safe or safer than previous models. Early testing found users preferred Sonnet 4.6 over Sonnet 4.5 about 70% of the time and over Opus 4.5 about 59% of the time.
🔗 Source: Summary based on anthropic.com View Source | Found on Feb 17, 2026
India’s largest manufacturers are collaborating with global industrial software leaders Cadence, Siemens, and Synopsys to build AI factories accelerated by NVIDIA AI infrastructure, CUDA-X, and Omniverse libraries. The country is investing $134 billion in new manufacturing capacity across construction, automotive, renewable energy, and robotics. Reliance New Energy is expanding its collaboration with NVIDIA and Siemens for gigafactory simulation and design. Addverb Technologies uses Siemens Technomatix, NVIDIA Omniverse libraries, and Cosmos world foundation models for factory digital twins. Havells India Limited achieved 6x faster fluid dynamic simulations using Synopsys’ Ansys Fluent powered by CUDA-X.
🔗 Source: Summary based on blogs.nvidia.com View Source | Found on Feb 18, 2026
On February 17, 2026, Meta announced a multi-year strategic partnership with NVIDIA to advance its AI infrastructure roadmap, focusing on large-scale deployment of NVIDIA technology to support Meta’s AI-optimized data centers and core business. The collaboration aims to deliver substantial improvements in performance per watt for efficient AI operations at scale. NVIDIA Confidential Computing will be used for WhatsApp private messaging, ensuring user data confidentiality and integrity. Engineering teams from both companies will co-design across CPUs, GPUs, networking, and software to optimize state-of-the-art AI models for billions of users worldwide.
🔗 Source: Summary based on about.fb.com View Source | Found on Feb 17, 2026
The Linux Foundation Research, in partnership with Meta, released a report on February 18, 2026, detailing how open source innovation, workforce development, and digital public infrastructure are advancing India’s AI ecosystem. India’s AI market is projected to grow from $6 billion in 2024 to nearly $32 billion by 2031. Over 200,000 startups operate in India, with most relying on open technologies for cost-effective and locally tailored AI solutions. The report highlights rapid AI hiring growth and use cases such as medical chatbots and tools for farmers that improve access to essential services even in low-connectivity environments.
🔗 Source: Summary based on about.fb.com View Source | Found on Feb 18, 2026
In 2025, AI experienced a significant shift, becoming a proactive partner capable of reasoning and navigating the world. Google’s Responsible AI Progress Report highlights that responsible AI development is now fully integrated into product and research lifecycles. The company utilized robust testing processes, human expertise, and AI-enabled automation to mitigate risks as models became more capable, personalized, and multimodal. Their multi-layered governance approach operationalizes AI Principles across the entire lifecycle. The report details efforts such as forecasting floods for 700 million people and decoding the human genome to prevent blindness, emphasizing broad access and collaboration with governments, academics, and civil society.
🔗 Source: Summary based on blog.google View Source | Found on Feb 17, 2026
The article details the rapid expansion of AI-related financial structures, highlighting a debt-fueled capex cycle among hyperscalers such as Microsoft, NVIDIA, Amazon, Meta, Google, OpenAI, and Anthropic. In 2025, data centre spending soared and asset-backed securities tied to data centres surged. Meta’s US$30 billion Hyperion project exemplifies off-balance-sheet financing. A US$2+ trillion gap exists between current funding sources and buildout plans. Private credit lenders are underwriting 10–20-year assets despite GPU chips’ one-year economic life. Token prices are falling over 70% per year; inference workloads are high-token but low-monetisation. The first defaults are expected in 2027-2028 as lease terms expire.
🔗 Source: Summary based on man.com View Source | Found on Feb 19, 2026
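The tension the article describes can be made concrete with a back-of-the-envelope sketch: a fixed inference workload whose revenue shrinks as token prices fall, set against the level payment on a long-dated amortising loan. All figures below are hypothetical illustrations, not numbers from the article.

```python
# Hypothetical illustration (not the article's figures): revenue per year from a
# flat inference workload when token prices fall ~70% annually, versus the level
# annual payment on an amortising loan (standard annuity formula).

def token_revenue(year0_revenue: float, annual_price_decline: float, years: int):
    """Yearly revenue if volume is flat and price falls at a constant rate."""
    return [year0_revenue * (1 - annual_price_decline) ** t for t in range(years)]

def flat_debt_service(principal: float, rate: float, years: int) -> float:
    """Level annual payment on an amortising loan: P * r / (1 - (1 + r)^-n)."""
    return principal * rate / (1 - (1 + rate) ** -years)

revenues = token_revenue(year0_revenue=100.0, annual_price_decline=0.70, years=10)
payment = flat_debt_service(principal=500.0, rate=0.06, years=10)

for t, rev in enumerate(revenues):
    print(f"year {t}: revenue {rev:8.2f} vs debt service {payment:6.2f}")
```

With these toy inputs, revenue falls below the fixed payment within two years, which is the shape of the mismatch the article points to between short-lived asset economics and 10–20-year financing.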
The GAM Emerging Market Equity team highlights Asia’s critical role in the global AI ecosystem, with companies like TSMC, SK Hynix, Samsung Electronics, and MediaTek leading across hardware and integrated circuit design. The EM tech sector now comprises 29% of the MSCI EM Index, up from 21% in mid-2023. Cloud service providers are projected to spend USD 332 billion on capex in 2025 (up 52% year-on-year), with forecasts for over 20% growth in 2026. Memory demand is surging due to inference-based AI models, with DRAM shortages expected into early 2028 and industry backlogs unlikely cleared before late 2027.
🔗 Source: Summary based on gam.com View Source | Found on Feb 16, 2026
In January, US non-farm payrolls increased by 130,000 and the unemployment rate fell to 4.3% from 4.4%, though job growth for 2025 was revised down to 15,000 new positions per month from a previous estimate of 49,000. Job gains were mainly in healthcare and government sectors, while other sectors experienced losses. Wage growth stabilized and both job-finding and job-loss rates remained low. Core CPI rose 2.5% year-on-year, marking the slowest pace in five years. Anthropic raised USD 30 billion at a USD 380 billion valuation, and Alphabet issued 100-year sterling bonds for AI investments.
🔗 Source: Summary based on pictet.com View Source | Found on Feb 16, 2026
LSEG has launched Model-as-a-Service (MaaS), a new platform that allows financial institutions to host, distribute, and analyze models securely through a governed marketplace. Societe Generale has joined as a provider, making seven of its flagship datasets and analytics—covering Fixed Income, FX, ESG, and Equities—available via LSEG’s marketplace. Clients can access both Societe Generale’s and LSEG’s analytics in one integrated experience. MaaS leverages LSEG’s partnership with Microsoft to provide secure, scalable access to datasets and models without extra integration needs and uses Model Context Protocol connectors for direct delivery into AI ecosystems like Microsoft Copilot Studio.
🔗 Source: Summary based on lseg.com View Source | Found on Feb 19, 2026
A large-scale study by Xin Qiu, Junlong Tong, Yirong Sun, Yunpu Ma, Wei Zhang, and Xiaoyu Shen evaluates LLM-based time series forecasting (LLM4TSF) using 8 billion observations across 17 forecasting scenarios and 4 horizons. The research finds that LLM4TSF improves forecasting performance, particularly in cross-domain generalization. Pre-alignment strategies outperform post-alignment in over 90% of tasks. Both pretrained knowledge and model architecture contribute complementary strengths: pretraining is crucial under distribution shifts while architecture aids complex temporal modeling. Under large-scale mixed distributions, a fully intact LLM is indispensable as shown by token-level routing analysis and prompt-based improvements.
🔗 Source: Summary based on arxiv.org View Source | Found on Feb 17, 2026
FactorMiner, introduced by Yanlong Wang and colleagues on 16 February 2026, is a self-evolving agent framework designed for formulaic alpha factor mining in quantitative investment. It features a Modular Skill Architecture for systematic financial evaluation and an Experience Memory that distills historical mining trials into actionable insights. By implementing the Ralph Loop paradigm—retrieve, generate, evaluate, and distill—FactorMiner guides exploration using memory priors to reduce redundancy and focus on promising directions. Experiments across multiple datasets demonstrate FactorMiner’s ability to construct a diverse library of high-quality factors with competitive performance while maintaining low redundancy as the library scales.
🔗 Source: Summary based on arxiv.org View Source | Found on Feb 17, 2026
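The retrieve-generate-evaluate-distill cycle the paper calls the Ralph Loop can be sketched as a simple agent loop over a trial memory. Everything below is an illustrative stub, not the authors' implementation: the factor expressions, mutation rule, and scoring function are placeholders.

```python
# Illustrative sketch of a retrieve -> generate -> evaluate -> distill loop in
# the spirit of FactorMiner's "Ralph Loop". All internals are stubs, not the
# paper's actual skill modules or evaluation pipeline.
import random

memory = []  # distilled history of mining trials: (factor_expr, score)

def retrieve(k: int = 3):
    """Recall the top-k prior trials to bias generation toward what worked."""
    return sorted(memory, key=lambda x: -x[1])[:k]

def generate(priors):
    """Propose a candidate factor expression (stub: mutate the best prior)."""
    base = priors[0][0] if priors else "close / mean(close, 20)"
    return base + f" * rank(volume_{random.randint(1, 60)})"

def evaluate(expr: str) -> float:
    """Score the factor; in practice this would be a backtest metric (stub)."""
    return random.random()

def distill(expr: str, score: float) -> None:
    """Record the trial so future retrievals can reuse or avoid it."""
    memory.append((expr, score))

random.seed(0)
for _ in range(10):
    priors = retrieve()
    candidate = generate(priors)
    distill(candidate, evaluate(candidate))

best_expr, best_score = max(memory, key=lambda x: x[1])
```

The memory prior is what distinguishes this from blind search: generation starts from previously high-scoring expressions, which is how the paper reports reducing redundancy as the factor library grows.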
The article, authored by Srijan Sood, Kassiani Papasotiriou, Marius Vaiciulis, and Tucker Balch, presents a comparative study between model-free Deep Reinforcement Learning (DRL) and Mean-Variance Portfolio Optimization (MVO) for optimal portfolio allocation. The research details a practical DRL implementation for portfolio optimization and the adjustments needed to make MVO a fair baseline. Backtest results show strong DRL performance on metrics such as the Sharpe ratio, maximum drawdown, and absolute returns. The paper was published at the FinPlan'23 Workshop during the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023).
🔗 Source: Summary based on arxiv.org View Source | Found on Feb 20, 2026
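For readers unfamiliar with the MVO baseline the paper compares against, the classical maximum-Sharpe (tangency) portfolio takes weights proportional to Σ⁻¹μ, normalised to sum to one. The sketch below uses two assets so the 2x2 inverse can be written out by hand; the inputs are illustrative, not from the paper.

```python
# Classical mean-variance tangency weights, w ∝ Σ⁻¹ μ, for two assets.
# Illustrative inputs; not the paper's data or full constrained MVO setup.

mu = [0.08, 0.12]                 # expected annual returns
cov = [[0.04, 0.01],
       [0.01, 0.09]]              # covariance matrix of returns

# Invert the 2x2 covariance matrix analytically.
det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
inv = [[ cov[1][1] / det, -cov[0][1] / det],
       [-cov[1][0] / det,  cov[0][0] / det]]

# Unnormalised tangency weights: Σ⁻¹ μ, then scale to sum to one.
raw = [inv[0][0] * mu[0] + inv[0][1] * mu[1],
       inv[1][0] * mu[0] + inv[1][1] * mu[1]]
total = raw[0] + raw[1]
weights = [w / total for w in raw]

print("tangency weights:", [round(w, 4) for w in weights])
```

A DRL agent, by contrast, learns the allocation policy directly from reward signals rather than from estimated μ and Σ, which is exactly where the paper's comparison sits.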
The article, authored by Yaxuan Kong and nine others and submitted on 15 February 2026, highlights that Large Language Models (LLMs) are increasingly used in financial workflows, but evaluation practices lag behind. The authors identify five recurring biases—look-ahead bias, survivorship bias, narrative bias, objective bias, and cost bias—that can distort financial LLM applications. A review of 164 papers from 2023 to 2025 revealed that no single bias is discussed in more than 28 percent of studies. The paper proposes a Structural Validity Framework and an evaluation checklist to address these issues before deployment claims.
🔗 Source: Summary based on arxiv.org View Source | Found on Feb 17, 2026
The article, accepted to the WASSA Workshop at EACL 2026, presents an agentic data augmentation method for Aspect-Based Sentiment Analysis (ABSA) that uses iterative generation and verification to create high-quality synthetic training examples. The study compares this approach with a prompting-based baseline using the same model and instructions across three ABSA subtasks—Aspect Term Extraction, Aspect Sentiment Classification, and Aspect Sentiment Pair Extraction—on four SemEval datasets and two encoder-decoder models: T5-Base and Tk-Instruct. Results show agentic augmentation outperforms prompting in label preservation, especially for T5-Base, leading to higher gains when combined with real data.
🔗 Source: Summary based on arxiv.org View Source | Found on Feb 19, 2026
The article "A Computational Framework for Financial Structures" by Antonio Scala, submitted on 16 February 2026, presents a general computational representation for financial structures such as securitisations and insurance contracts. The framework separates the stochastic generation of inflows from deterministic allocation rules, with allocation rules, trigger conditions, and priority relations expressed as state-dependent operators mapping realised inflows to payments. This unified computational architecture enables consistent evaluation of performance and risk characteristics across different configurations and supports systematic analysis of structural design, risk distribution, and contractual transparency under uncertainty.
🔗 Source: Summary based on arxiv.org View Source | Found on Feb 17, 2026
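The paper's core separation—stochastic inflows on one side, deterministic state-dependent allocation rules on the other—can be illustrated with a minimal sequential-pay waterfall. The tranche names and amounts below are hypothetical, chosen only to show the priority mapping from realised inflow to payments.

```python
# Minimal sketch of a deterministic allocation rule in the spirit of the
# paper's framework: a sequential-pay waterfall mapping a realised inflow to
# tranche payments. Tranche names and amounts are hypothetical.

def waterfall(inflow: float, claims: list[tuple[str, float]]) -> dict[str, float]:
    """Allocate a realised inflow to tranches in priority order.

    `claims` is an ordered list of (tranche, amount due); whatever remains
    after the last fixed claim goes to the residual "equity" tranche.
    """
    payments = {}
    remaining = inflow
    for tranche, due in claims:
        paid = min(remaining, due)  # pay up to the amount due, if funds remain
        payments[tranche] = paid
        remaining -= paid
    payments["equity"] = remaining  # residual claim absorbs the surplus
    return payments

claims = [("senior", 60.0), ("mezzanine", 30.0)]
print(waterfall(100.0, claims))  # full payment; residual 10.0 to equity
print(waterfall(75.0, claims))   # shortfall hits mezzanine first, equity gets 0
```

Because the allocation rule is a pure function of the realised inflow, the same structure can be re-evaluated across any simulated inflow distribution, which is what enables the consistent cross-configuration comparisons the paper describes.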