A weekly newsletter on technology applications in investment management, with an AI/LLM and automation angle. We combine 100% human curation and selection with LLM-based standardisation and summarisation, plus more deterministic search/collection, classification, and workflow, powered by Kubro(TM). Curated news, announcements, and posts, primarily taken directly from sources (arXiv papers, major AI/tech/data companies, investment firms). It's an evolving project.
Disclaimers: content is not fully human-verified; the summaries below are AI-generated, and AI/LLMs may hallucinate or produce inaccurate summaries. These are selected items only, not intended as a comprehensive view, and are provided for information purposes only. Please DM with feedback and requests.
T. Rowe Price has used machine learning and natural language processing in portfolio management and research since 2006, established a New York Technology Development Center in 2017, and created the Investment Data Insights team in 2019. After large language models entered mainstream use in late 2022, it rolled out Investor Copilot, now Chat TRP, within months and later formed the Investment AI Solutions group. The firm has introduced AI Adoption Specialist roles in Baltimore, London, and Hong Kong and is expanding AI tools across research, content analysis, financial modeling, coding, portfolio analysis, and trading insights. It says AI compresses research cycles, but human judgment, oversight, and proprietary research remain essential.
🔗 Source: Summary based on troweprice.com View Source | Found on Apr 25, 2026
Artificial intelligence was the first topic in Baillie Gifford’s Research Agenda last year and appears again in the article published on April 17, 2026. The piece argues that major technology shifts turn scarcity into abundance, with prior examples in information and energy. It says the printing press made books abundant and shifted scarcity to distribution, while advances in energy and engines transformed society by replacing human and animal labour. For AI, the article says large language models process vast amounts of data across thousands of GPUs, creating heavy demand for data movement, electricity, and cooling. It concludes that investors should identify emerging scarcity and assess which previously scarce capabilities generative AI may make commonplace.
🔗 Source: Summary based on bailliegifford.com View Source | Found on Apr 20, 2026
Jack Chen said he used AI to analyze an acquisition of a global media business, helping him understand the asset market by market and walk through different deal scenarios. He said the tool accelerated his research, cutting work that previously took 10 hours down to 10 minutes, and served as a sparring partner for testing his assumptions and logic, letting him focus more on forecasting than fact gathering.
🔗 Source: Summary based on alliancebernstein.com View Source | Found on Apr 21, 2026
The article says the U.S. stock market is in an AI boom, but not yet in an AI bubble. The author defines a bubble as a self-sustaining rise in prices from speculative trading of an obviously overvalued asset, and an AI bubble as one triggered by AI breakthroughs. As of December 2025, overvaluation is plausible and inflows are present, through increased buying by U.S. retail investors and foreign investors, but the equity-issuance criterion is a firm no: net issuance is negative, at -0.9%, with companies buying about $500B more equity than they sell each year. The author concludes there is no market-wide bubble today, though one could emerge in 2026.
🔗 Source: Summary based on acadian-asset.com View Source | Found on Apr 20, 2026
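The three-part test above (overvaluation, inflows, issuance) can be read as a simple checklist in which all conditions must hold. The sketch below is an illustrative reading of the summary, not the author's actual methodology; the function name and boolean framing are assumptions.

```python
# Hedged sketch of the article's three-part bubble test. The checklist
# structure is an illustrative reading of the summary, not the author's
# exact methodology.

def bubble_checklist(overvaluation_plausible: bool,
                     inflows_present: bool,
                     net_issuance_pct: float) -> bool:
    """A market-wide bubble is diagnosed only if all three criteria hold."""
    # Issuance counts toward a bubble only if firms are net sellers of equity.
    issuance_positive = net_issuance_pct > 0
    return overvaluation_plausible and inflows_present and issuance_positive

# As of December 2025: net issuance is -0.9% (buybacks run about $500B
# above new sales each year), so the issuance criterion fails.
print(bubble_checklist(True, True, -0.9))  # -> False
```

With the issuance criterion negative, the checklist returns no bubble even though the other two conditions are arguably met, matching the article's conclusion.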
The article says AI is rapidly reshaping the economy and markets, with impacts expected across IT, internet, financial services, marketing, and other services. It compares AI’s trajectory to chess, noting a shift from human dominance in the early 1990s to machine victory in 1997, centaur dominance from roughly 1998 to 2012, and pure AI outperforming centaurs from 2013 to 2016. It says hyperscaler capital expenditures are expected to reach about US$667 billion in 2026, up 60% year over year, while supply constraints remain tight across semiconductors, GPUs, CPUs, networking, optical components, and memory. The article also warns that job-related public backlash could slow AI adoption and data center expansion.
🔗 Source: Summary based on wellington.com View Source | Found on Apr 21, 2026
Last month, Balyasny’s Applied AI team hosted its annual AI Hackathon, bringing together BAM employees from the U.S., EMEA, and Asia, across business and investing teams. The event was open to all staff, regardless of prior coding experience. Participants worked with Applied AI team members to build agentic workflows, automate complex tasks, and improve daily processes. Employees voted on projects based on creativity and impact, among other categories. Winners came from investing teams, data science, office services, and other groups.
🔗 Source: Summary based on bamfunds.com View Source | Found on Apr 21, 2026
The article argues that technology typically reshapes employment rather than destroying it. It cites Lyndon Johnson’s 1964 committee, which reported in 1966 that “Technology eliminates jobs, not work,” and notes that around 60% of Americans’ jobs are now in occupations that did not exist in 1940. Examples include railways increasing horse use, washing machines helping raise female labor-force participation from 5% in 1900 to nearly 60% in 2000, and ATMs expanding bank branches rather than eliminating tellers. It says AI may change the nature of work more than it raises unemployment, but AI investors face risks from speculative financing, circular supply chains, and demand uncertainty.
🔗 Source: Summary based on lombardodier.com View Source | Found on Apr 22, 2026
Goldman Sachs Research said AI has both reduced and increased jobs in different sectors. In occupations at high risk of being substituted by AI, jobs are being lost, while employment is rising in roles where AI augments human labor. Economist Elsie Peng said this has created a modest net drag on US labor markets. The team combined an AI displacement score with an IMF-developed index measuring AI complementarity, allowing a more detailed analysis. Goldman Sachs Research estimated that AI reduced monthly payroll growth by about 16,000 jobs in the US over the past year and raised the unemployment rate by 0.1 percentage point.
🔗 Source: Summary based on goldmansachs.com View Source | Found on Apr 24, 2026
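The summary above describes combining a displacement score with a complementarity index to separate substitution from augmentation. The quadrant sketch below is purely illustrative: the labels, [0, 1] scales, and 0.5 thresholds are assumptions, not Goldman Sachs Research's or the IMF's actual method.

```python
# Illustrative sketch only: the summary says a displacement score was
# combined with an IMF complementarity index. The quadrant labels and
# 0.5 thresholds below are hypothetical, not the researchers' method.

def classify_occupation(displacement: float, complementarity: float) -> str:
    """Map two [0, 1] scores to a rough AI-exposure category."""
    if displacement >= 0.5 and complementarity < 0.5:
        return "substitution risk"   # roles where jobs are being lost
    if displacement >= 0.5 and complementarity >= 0.5:
        return "augmentation"        # AI raises the role's productivity
    return "low exposure"

print(classify_occupation(0.8, 0.2))  # -> substitution risk
print(classify_occupation(0.7, 0.9))  # -> augmentation
```

The point of the two-axis framing is that a high displacement score alone is ambiguous; the complementarity axis is what distinguishes job losses from rising employment in augmented roles.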
As AI shifts from training to inference, demand for compute is rising sharply, with agentic AI increasing a single user’s compute needs by 10-100x. Nvidia GPU orders have reached $1 trillion through 2027, twice last year’s level, while lead times for GPUs and custom silicon are nearly a year and all three HBM suppliers are sold out for 2026. Data center vacancy is 1%, 92% of construction capacity is pre-leased, and power constraints persist: natural gas takes 5-7 years to bring online, nuclear 10+ years, solar 2-4 years, and many U.S. grid interconnection queues run beyond five years.
🔗 Source: Summary based on am.jpmorgan.com View Source | Found on Apr 20, 2026
The article says the rising power demand for AI infrastructure has created a collision between two operating cultures. The AI ecosystem—labs, hyperscalers and semiconductor companies—moves quickly, with incentives for rapid iteration, aggressive capital deployment and first-mover advantage. By contrast, utilities, grid operators, regional markets and equipment manufacturers have seen little structural growth for nearly two decades and prioritize reliability, regulatory compliance, capital preservation and risk minimization. In response, the AI ecosystem is shifting to new geographies, underwriting long-term contracts, expanding supply chains and building generation alongside compute through Behind the Meter or Bring Your Own Power projects. These projects start with anchor capacity and scale in phases.
🔗 Source: Summary based on blackrock.com View Source | Found on Apr 22, 2026
Amundi’s April 17, 2026 paper compares the current AI-driven market with the late-1990s dot-com boom using proprietary AI exposure analysis in the S&P 500 Index and parallel AI and ex-AI portfolios, alongside TMT and ex-TMT portfolios for the dot-com period. It finds many similarities between the two episodes but says the current AI phase lacks the “explosive valuation dynamics” typical of late-stage bubbles. The main concern is concentration risk, as a narrow group of AI-related stocks has driven a disproportionate share of index-level returns. The paper says standard equity allocations carry implicit exposure to the AI factor and long-duration growth, and calls for monitoring earnings, capital expenditure, market breadth, issuance activity, and valuation re-acceleration.
🔗 Source: Summary based on research-center.amundi.com View Source | Found on Apr 20, 2026
The article says AI is shifting from large language models that predict text to world models that simulate reality and consequences. It identifies two types: physical world models for gravity, friction, and other dynamics, and virtual or social world models for incentives, norms, and interactions among agents. It cites Yann LeCun’s AMI Labs and JEPA framework, and Fei-Fei Li’s World Labs focused on spatial intelligence. World models are described as useful for robots, supply chains, autonomous systems, and strategic planning, with simulations enabling repeated rehearsal and stress-testing before real-world action.
🔗 Source: Summary based on goldmansachs.com View Source | Found on Apr 23, 2026
AI’s impact on software and information-services companies is likely to be uneven, with “systems of record” and trusted data potentially benefiting as clients demand controls, audit trails, and accuracy. The article says AI may reinforce the value of mission-critical platforms embedded in workflows, decision-making, compliance, and accountability, especially those with proprietary, longitudinal datasets. It also says prospective public listings of Anthropic and OpenAI could increase disclosure on revenue sources, use cases, customer segments, acquisition trends, and workflows, shifting the debate from stories and selective disclosures to verified financial statements and hard numbers, and helping investors distinguish durable businesses from those facing true AI-driven erosion.
🔗 Source: Summary based on mfs.com View Source | Found on Apr 20, 2026
A global retailer tested an AI agent for pricing that analyzed historical sales, inventory and market signals and generated near-real-time recommendations, but live deployment exposed inconsistent regional pricing, missed contractual constraints, recommendations on products already in active promotions, and conflicts with internal policy. The article says pilots often succeed under curated data and human oversight, yet production environments introduce fragmented data, varying policies and workflow dependencies. Gartner predicts organizations will abandon 60% of AI projects unsupported by AI-ready data through 2026, and MIT NANDA Initiative findings indicate up to 95% of enterprise generative AI projects fail to deliver ROI. The article says the core issue is the context gap, not model sophistication.
🔗 Source: Summary based on ibm.com View Source | Found on Apr 20, 2026
OpenAI’s Codex agentic coding application is now powered by GPT-5.5 on NVIDIA GB200 NVL72 rack-scale systems. More than 10,000 NVIDIANs across engineering, product, legal, marketing, finance, sales, HR, operations, and developer programs are using it; engineering has had access for a few weeks, with measurable gains: debugging cycles have dropped from days to hours, and experimentation that once took weeks now runs overnight. NVIDIA says GB200 NVL72 delivers 35x lower cost per million tokens and 50x higher token output per second per megawatt than prior-generation systems. The deployment uses remote SSH, approved cloud VMs, read-only production access, and zero-data retention.
🔗 Source: Summary based on blogs.nvidia.com View Source | Found on Apr 23, 2026
Google Cloud Next introduced Google’s eighth-generation TPU chips, TPU 8t and TPU 8i, designed with Google DeepMind for training and inference. TPU 8t targets massive training workloads and is said to deliver nearly 3x the compute performance per pod over the previous generation, scale to 9,600 chips and two petabytes of shared high-bandwidth memory, and provide 121 ExaFlops of compute. TPU 8i targets latency-sensitive inference workloads, pairing 288 GB of HBM with 384 MB of on-chip SRAM, and is claimed to improve performance-per-dollar by 80% over the previous generation. Both chips will be generally available later this year.
🔗 Source: Summary based on blog.google View Source | Found on Apr 22, 2026
DeepSeek-V4 was officially released and open-sourced on April 24, 2026, with Huawei Cloud among the first to adapt it. The model offers a million-token context window and capabilities in agents, world knowledge, and reasoning described as leading among Chinese and open-source AI models. DeepSeek-V4-Flash reduces parameters to 284B, cutting inference costs and enabling faster, cheaper API services. Huawei Cloud’s MaaS platform now offers one-click access to V4-Flash tokens without deployment. Huawei Cloud optimized scheduling, computation, and data transfer, supporting efficient KVCache management, Ascend fusion operators, asynchronous scheduling, and speculative decoding for high-performance native one-million-token inference.
🔗 Source: Summary based on huaweicloud.com View Source | Found on Apr 24, 2026
OpenAI introduced workspace agents in ChatGPT, shared AI agents that teams can build to handle complex, multi-step workflows across organizational tools. Powered by Codex and running in the cloud, they can prepare reports, write code, respond to messages, use connected apps, remember context, and continue working when users are away. Teams can create agents from descriptions or templates for use cases such as software reviews, feedback routing, metrics reporting, lead outreach, and vendor risk management. Workspace agents include permissions, approvals, analytics, admin controls, and prompt-injection safeguards. They are in research preview for ChatGPT Business, Enterprise, Edu, and Teachers plans.
🔗 Source: Summary based on openai.com View Source | Found on Apr 24, 2026
Google released the Gemini Deep Research agent to developers via the Interactions API in December. On April 21, 2026, Lukas Haas said the company was expanding its autonomous research capabilities with two new evolutions: Deep Research and Deep Research Max. He said Deep Research now integrates Gemini 3.1 Pro and has become a foundation for enterprise workflows across finance, life sciences, market research, and other uses. He also said a single API call can now trigger exhaustive research workflows that combine the open web with proprietary data streams to produce professional-grade, fully cited analyses.
🔗 Source: Summary based on blog.google View Source | Found on Apr 21, 2026
Vineeth Sai Narajala’s April 21, 2026 article says AI-powered IDEs such as Cursor, VS Code, and Windsurf now use MCP servers and agent skills, which expand access to file systems, APIs, and shell commands and create new security risks. The article introduces an open source “AI Agent Security Scanner for IDEs” that combines Skill Scanner, MCP Scanner, and Watchdog to scan MCP servers, agent skills, and AI-generated code, while monitoring sensitive files for unauthorized changes. It says the tool uses a defense-in-depth model, supports export to JSON, Markdown, or CSV, integrates with Cursor hooks, and keeps code on the user’s machine without transmitting source code.
🔗 Source: Summary based on blogs.cisco.com View Source | Found on Apr 22, 2026
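The Watchdog component above is described as monitoring sensitive files for unauthorized changes. A generic way to do that is hash-based change detection; the sketch below illustrates the idea only and is not the Cisco scanner's actual implementation (the function names and logic are assumptions).

```python
# Generic sketch of hash-based change detection for sensitive files, in
# the spirit of the article's "Watchdog" component. This is NOT the
# scanner's actual implementation; names and logic are illustrative.
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 digest for each existing file in the watch list."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths if Path(p).is_file()}

def detect_changes(baseline, paths):
    """Return files whose content differs from the baseline snapshot
    (including files added or removed since the snapshot)."""
    current = snapshot(paths)
    return sorted(p for p in set(baseline) | set(current)
                  if baseline.get(p) != current.get(p))
```

A tool like this would snapshot files such as agent configuration or instruction files at session start, then flag any digest mismatch for review; the actual scanner layers this with MCP and skill scanning in its defense-in-depth model.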
Google Cloud Next ‘26 in Las Vegas marked what the article calls the “agentic era,” with AI described as an active partner that can do work safely and autonomously. The company introduced the Gemini Enterprise Agent Platform, an end-to-end workspace to build, govern, and scale AI agents. It provides direct access to Gemini 3.1 Pro for complex workflows, Gemini 3.1 Flash Image, also called Nano Banana 2, for visual assets, Lyria 3 for professional-grade audio, and Anthropic’s Claude Opus 4.7. Agent Studio, a low-code interface, lets developers and business users build and test agents using natural language.
🔗 Source: Summary based on blog.google View Source | Found on Apr 25, 2026
Meta announced on April 24, 2026, an agreement with Amazon Web Services to add tens of millions of AWS Graviton cores to its compute portfolio, making Meta one of the largest Graviton customers in the world. The first deployment will begin with tens of millions of cores and can expand as Meta’s AI capabilities grow. Meta said the deal builds on its long-standing relationship with AWS and supports its goal of diversifying compute for agentic AI workloads. Meta also said AWS Graviton5 cores provide faster data processing and greater bandwidth for CPU-intensive AI systems.
🔗 Source: Summary based on about.fb.com View Source | Found on Apr 24, 2026
NVIDIA and Google Cloud have collaborated for more than a decade on a full-stack AI platform. At Google Cloud Next in Las Vegas, Google announced A5X powered by NVIDIA Vera Rubin NVL72 rack-scale systems, claiming up to 10x lower inference cost per token and 10x higher token throughput per megawatt than the prior generation. A5X will use NVIDIA ConnectX-9 SuperNICs and next-generation Google Virgo networking, scaling to 80,000 NVIDIA Rubin GPUs in a single site cluster and 960,000 in a multisite cluster. Google also previewed Gemini on Google Distributed Cloud, confidential VMs with Blackwell GPUs, and agentic AI on Gemini Enterprise Agent Platform.
🔗 Source: Summary based on blogs.nvidia.com View Source | Found on Apr 22, 2026
NEC Corporation will use Claude for about 30,000 NEC Group employees worldwide and become Anthropic’s first Japan-based global partner. The companies will jointly develop secure, industry-specific AI products for the Japanese market, starting with finance, manufacturing, and local government, and also targeting cybersecurity. NEC is integrating Claude into its Security Operations Center services and next-generation cybersecurity service. Claude, including Claude Opus 4.7, and Claude Code will be incorporated into NEC BluStellar Scenario. NEC will also establish a Center of Excellence and expand Claude Cowork across internal operations as part of its Client Zero initiative.
🔗 Source: Summary based on anthropic.com View Source | Found on Apr 24, 2026
Anthropic said it signed a new agreement with Amazon to deepen their partnership and secure up to 5 gigawatts of capacity for training and deploying Claude. The agreement includes more than $100 billion committed over the next ten years to AWS technologies, with new Trainium2 capacity coming online in the first half of 2026, significant Trainium2 capacity in Q2, scaled Trainium3 later in 2026, and nearly 1GW of Trainium2 and Trainium3 capacity by the end of 2026. Amazon is investing $5 billion in Anthropic today, with up to $20 billion more in the future, after $8 billion previously invested. Anthropic said its run-rate revenue surpassed $30 billion, up from about $9 billion at the end of 2025.
🔗 Source: Summary based on anthropic.com View Source | Found on Apr 24, 2026
Matt Renner’s April 22, 2026 article says the list of agentic-AI customer examples, first published two years earlier at Next ‘24, has been expanded again for Next ‘26 in Las Vegas, with 301 new entries marked by an asterisk. He says production AI and agentic systems are now deployed across virtually every one of the thousands of organizations attending. The article highlights five trends: AI shifting from assistants to agentic teams, natural language interfaces modernizing 40-year-old SAP, mainframe, and COBOL systems, generative media creating hundreds or thousands of variations with Veo 3 and Imagen 4, multimodal AI digitizing physical-world data, and cybersecurity agents automatically remediating threats.
🔗 Source: Summary based on blog.google View Source | Found on Apr 23, 2026
Unit 42 said its hands-on testing of frontier AI models found they can autonomously reason, identify software vulnerabilities, analyze attack paths and complex exploit chains, and function as full-spectrum security researchers. The report said these models increase the risk of zero-day and N-day vulnerabilities, lower the barrier for unskilled attackers, and accelerate the discovery-to-exploitation cycle. It said open-source software faces heightened short-term risk because source code is public, while models showed only marginal advances against compiled code. Unit 42 also described AI-enabled attack stages from reconnaissance to exfiltration and recommended stronger prevention, code governance, hardened build systems, faster patching, automated incident response and updated vulnerability disclosure processes.
🔗 Source: Summary based on unit42.paloaltonetworks.com View Source | Found on Apr 21, 2026
The article says Mythos is a frontier AI model embedded in a system that can rapidly find and patch software vulnerabilities, showing that cybersecurity capability depends on the surrounding system, not the model alone. It says AI cybersecurity is jagged and that smaller models in systems with deep security expertise could produce similar outcomes more cheaply. It argues open code and tooling can help defend against attackers by distributing detection, verification, coordination, and patch propagation across communities. It also says semi-autonomous AI agents with human approval, built on open, auditable components and integrated with security tooling, can help organizations find vulnerabilities and assist with patching under their own control.
🔗 Source: Summary based on huggingface.co View Source | Found on Apr 22, 2026
An NVIDIA AI Red Team report, described by Daniel Teixeira and published on April 20, 2026, says a simulated Golang project using the malicious library github.com/cursorwiz/echo showed an indirect AGENTS.md injection path against OpenAI Codex. In the scenario, a compromised dependency with code execution overwrote AGENTS.md, instructing Codex to add a five-minute delay to Golang main functions, hide the change in summaries, PR descriptions, and commit messages, and override user instructions. OpenAI acknowledged the report and said the attack did not significantly exceed risks already possible through compromised dependencies and existing inference APIs.
🔗 Source: Summary based on developer.nvidia.com View Source | Found on Apr 21, 2026
A 2026 Neurones IT report said fewer than 1 in 10 enterprise applications are fully observable, and AI-driven observability could cut ITOps costs by up to 35%. Security teams receive about 4,500 alerts a day and can respond to roughly a third of them, according to Vectra. A 2025 IBM Institute for Business Value study found that 45% of executives see lack of visibility as a major roadblock to agentic integration, and nearly 70% expect a regulatory fine related to GenAI integration. The article says AI observability can improve user experience, security, uptime and performance while reducing human effort and downtime.
🔗 Source: Summary based on ibm.com View Source | Found on Apr 24, 2026
Cisco said it is working in France with a major European bank to design a next-generation AI data center aimed at reducing energy costs and shrinking the bank’s AI compute footprint. The system combines high-density compute, immersion cooling, and advanced network automation to support growing AI workloads. Cisco said global data center electricity demand is projected to more than double by 2030 versus 2024, and grid constraints could delay up to 20% of planned capacity, according to the International Energy Agency. Cisco also said it aims for net zero greenhouse gas emissions across its value chain by 2040 and matched 100% of its fiscal 2025 electricity needs with renewable electricity.
🔗 Source: Summary based on blogs.cisco.com View Source | Found on Apr 24, 2026
Anthropic’s Economic Research team is launching the Anthropic Economic Index Survey, a monthly survey conducted through Anthropic Interviewer. It will ask Claude users whether AI is changing their work today, including tasks they may be handing off, productivity gains, and shifts in hiring and roles, as well as their expectations for the future. The survey launches today and will invite a small, randomly selected group of Claude users each month; anyone with a personal account at least two weeks old may be invited. Anthropic plans to publish insights in future Economic Index reports and other research briefs.
🔗 Source: Summary based on anthropic.com View Source | Found on Apr 22, 2026
A survey submitted on 23 Apr 2026, titled “Agentic Artificial Intelligence in Finance: A Comprehensive Survey,” by Irene Aldridge and 24 other authors, examines agentic artificial intelligence in financial markets. It describes agentic AI as autonomous systems capable of reasoning, planning, and adaptive decision-making with minimal human intervention, and says it differs from traditional algorithmic trading and generative AI through goal-oriented autonomy, continuous learning, and multi-agent coordination. The survey reviews system architecture, market applications, regulatory frameworks, and systemic implications, and notes both potential benefits—enhanced market efficiency, liquidity provision, and risk management—and challenges including market stability, regulatory compliance, interpretability, and systemic risk.
🔗 Source: Summary based on arxiv.org View Source | Found on Apr 24, 2026
The paper, submitted on 20 Apr 2026 by Shumiao Ouyang and Pengfei Sui, studies how AI agents form expectations and trade in experimental asset markets. Using a simulated open-call auction with autonomous Large Language Model agents, the authors report three findings: the agents show a pronounced disposition effect and recency-weighted extrapolative beliefs; these patterns aggregate into equilibrium dynamics that reproduce classic experimental results, including excess demand predicting future prices and a positive relationship between disagreement and trading volume; and a twenty-mechanism scoring framework shows that targeted prompt interventions can causally amplify or suppress behavioral mechanisms, significantly changing the size of market bubbles.
🔗 Source: Summary based on arxiv.org View Source | Found on Apr 21, 2026
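One of the behavioral mechanisms reported above, recency-weighted extrapolative beliefs, can be sketched as an exponentially weighted average of past returns. The decay form and parameter below are assumptions for illustration, not the paper's estimated specification.

```python
# Minimal sketch of a recency-weighted extrapolative belief rule, one of
# the mechanisms the paper reports in LLM trading agents. The exponential
# decay form and the decay value are illustrative assumptions, not the
# paper's specification.

def extrapolated_return(past_returns, decay=0.5):
    """Expected next return as a recency-weighted mean of past returns.

    past_returns is ordered oldest first; the most recent observation
    gets weight 1, the one before it decay, then decay**2, and so on.
    """
    weights = [decay ** k for k in range(len(past_returns))][::-1]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, past_returns)) / total

# A recent gain dominates the belief even after earlier losses:
print(extrapolated_return([-0.02, -0.01, 0.03]))  # -> ~0.0114 (positive)
```

Under a rule like this, beliefs chase recent price moves, which is the kind of extrapolation that can aggregate into the excess-demand and bubble dynamics the paper reproduces.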
The paper “QRAFTI: An Agentic Framework for Empirical Research in Quantitative Finance” by Terence Lim, Kumar Muthuraman, and Michael Sury was submitted on 20 Apr 2026. It introduces a multi-agent framework designed to emulate parts of a quantitative research team and support equity factor research on large financial panel datasets. QRAFTI combines a research toolkit for panel data with MCP servers that provide data access, factor construction, and custom coding operations as callable tools. The framework can replicate established factors, formulate and test new signals, and generate standardized research reports with narrative analysis and computational traces. The abstract says chained tool calls and reflection-based planning may outperform dynamic code generation alone on multi-step empirical tasks.
🔗 Source: Summary based on arxiv.org View Source | Found on Apr 21, 2026
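The chained-tool-call idea in the abstract can be sketched as an agent loop in which each tool reads and extends accumulated state. The tool names and plan structure below are hypothetical illustrations of the pattern, not QRAFTI's actual MCP API.

```python
# Hedged sketch of "chained tool calls" in a QRAFTI-style agent loop.
# The tool names (load_panel, build_factor, run_backtest) and the plan
# structure are hypothetical, not the paper's actual MCP tool interface.

TOOLS = {
    "load_panel":   lambda state: dict(state, data="panel"),
    "build_factor": lambda state: dict(state, factor="momentum"),
    "run_backtest": lambda state: dict(state, sharpe=1.1),
}

def run_chain(plan, state=None):
    """Execute tools in order; each step reads the accumulated state."""
    state = state or {}
    for step in plan:
        state = TOOLS[step](state)
    return state

result = run_chain(["load_panel", "build_factor", "run_backtest"])
print(result["sharpe"])  # -> 1.1
```

The abstract's reflection-based planning would sit around this loop: after each step the agent inspects the state and may revise the remaining plan, rather than generating one monolithic script up front.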
The paper presents the first portfolio-level validation of MarketSenseAI, a deployed multi-agent LLM equity system that generates live signals at each observation date to avoid look-ahead bias. It feeds four specialist agents—News, Fundamentals, Dynamics, and Macro—into a synthesis agent that issues monthly equity theses and recommendations. On the S&P 500 cohort over 19 months, the strong-buy equal-weight portfolio returned +2.18% per month versus +1.15% for a passive equal-weight benchmark, producing a +25.2% compound excess and ranking at the 99.7th percentile of 10,000 Monte Carlo portfolios (p=0.003). On the S&P 100 cohort over 35 months, it produced a +30.5% compound excess over the equal-weight benchmark (EQWL).
🔗 Source: Summary based on arxiv.org View Source | Found on Apr 21, 2026
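The 99.7th-percentile claim above corresponds to ranking the strategy's return against random portfolios drawn from the same universe. The sketch below illustrates that Monte Carlo test with synthetic data; the paper uses actual S&P 500 constituent returns, and the parameters here are placeholders.

```python
# Sketch of the Monte Carlo significance test described in the summary:
# rank a strategy's return against random equal-weight portfolios from
# the same universe. The synthetic universe below is illustrative only;
# the paper uses real S&P 500 constituent data.
import random

def mc_percentile(strategy_return, universe_returns, n_stocks,
                  n_sims=10_000, seed=0):
    rng = random.Random(seed)
    sims = []
    for _ in range(n_sims):
        picks = rng.sample(universe_returns, n_stocks)  # random equal-weight portfolio
        sims.append(sum(picks) / n_stocks)
    rank = sum(s < strategy_return for s in sims)
    return 100.0 * rank / n_sims  # percentile of the strategy among sims

# Synthetic universe of 500 stock returns, for illustration only:
rng = random.Random(42)
universe = [rng.gauss(0.01, 0.05) for _ in range(500)]
print(mc_percentile(strategy_return=0.05,
                    universe_returns=universe, n_stocks=30))
```

A strategy return far above the universe mean lands near the 100th percentile; the paper's p=0.003 is the complementary tail probability of its 99.7th-percentile rank.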
The paper, submitted on 9 Apr 2026, is titled “Machine Spirits: Speculation and Adaptation of LLM Agents in Asset Markets” and is authored by Maxime Saxena, Marco Pangallo, Fabio Caccioli, and R. Maria del Rio-Chanona. It studies 15 large language models of varying sizes, capabilities, and providers in a simulated financial market. The results show a spectrum of economic behaviour, from coordination on fundamental value to human-like speculative bubbles, and these behaviours are generally inconsistent with the rational expectations hypothesis. In mixed markets with heterogeneous agents, outcomes vary substantially across repeated simulations, advanced models do not consistently stabilise prices, and adaptation can increase profits while also increasing volatility.
🔗 Source: Summary based on arxiv.org View Source | Found on Apr 22, 2026
The paper, submitted on 21 Apr 2026 by Xinlin Wang and Mats Brorsson, studies deployment trade-offs of small language models with fewer than 10 billion parameters. It presents a large-scale comparison of open-source <10B models under three paradigms: the base model, a single agent with tools, and a multi-agent collaborative system. The authors report that single-agent systems provide the best balance between performance and cost, while multi-agent setups add overhead with limited gains. The paper argues that agent-centric design is important for efficient and trustworthy deployment in resource-constrained settings.
🔗 Source: Summary based on arxiv.org View Source | Found on Apr 22, 2026
Ethan Ratliff-Crain, Colin M. Van Oort, Matthew T. K. Koehler, and Brian F. Tivnan submitted a paper on 22 Apr 2026 titled “Testing replication for an agent-based model of market fragmentation and latency arbitrage.” The study attempts an independent replication of Wah and Wellman’s 2016 model of latency arbitrage in a fragmented market. The authors say faithful replication was limited by missing implementation details and limited quantitative reporting. They used more simulation runs to create bootstrap confidence intervals, drew on code released by the original authors for additional implementation details, and achieved relational equivalence for most metrics but rejected quantitative alignment where latency was non-zero. They also provide an ODD protocol for their implementations.
🔗 Source: Summary based on arxiv.org View Source | Found on Apr 23, 2026
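The bootstrap confidence intervals mentioned above come from resampling repeated simulation runs. The sketch below shows the standard percentile-bootstrap recipe with synthetic placeholder metrics; the method is generic, not the replication study's exact code.

```python
# Minimal sketch of percentile-bootstrap confidence intervals built from
# repeated simulation runs, as the replication study describes. The run
# metrics below are synthetic placeholders; the resampling recipe itself
# (resample runs with replacement, take percentiles of the means) is
# standard.
import random

def bootstrap_ci(run_metrics, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a simulation metric."""
    rng = random.Random(seed)
    n = len(run_metrics)
    means = sorted(
        sum(rng.choice(run_metrics) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Example: 50 simulated runs of some market metric (synthetic values).
rng = random.Random(1)
runs = [rng.gauss(100, 10) for _ in range(50)]
lo, hi = bootstrap_ci(runs)
print(round(lo, 1), round(hi, 1))  # 95% CI around the mean across runs
```

Using more simulation runs, as the authors did, tightens these intervals and lets "relational equivalence" be tested metric by metric against the original model's reported values.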