
We’re already using AI, but are we using it well? That’s the question that will define 2026.


ChatGPT’s arrival at the end of 2022 brought AI into the public consciousness. It sparked a debate about the role of artificial intelligence at work and in our personal lives. It also brought hype and hyperbole: predictions about the end of the world and the end of humanity, all of it driven by fear of the unknown. Fast forward three years, and as more people use AI – and become familiar with its advantages and limitations – the conversation has started to mature.

The question for 2026 is no longer whether AI will replace jobs or rewire how organisations operate. What really matters is how we use it. Those who remove human-in-the-loop involvement risk building fast systems that make poor decisions. And that “computer says no (or yes)” reliance can make it hard for others to challenge a decision or put forward their own point of view. But it’s the people who choose to work alongside AI, to use checks, balances and, dare we say, common sense, who will define how the technology is used in 2026 and beyond. Those who deploy AI thoughtfully, transparently and in ways that embrace human judgement rather than sideline it will be winners in the long run.

People-led automation, not machine-led workflows 

To that end, organisations seeing the most significant benefits from AI today are removing repetition rather than people from their processes. They’re using automation to reduce cognitive load so people can focus on interpretation, creativity, and decision-making. 

That’s a sensible approach because poorly implemented automation can accelerate errors or obscure how decisions were made. But when humans guide the logic, review the outputs, and set the boundaries, AI becomes a powerful extension of human capability rather than a substitute. 

Even where AI could theoretically run an entire end-to-end workflow, many teams still prefer the reassurance of human oversight. Offering both modes – zero-touch automation for those who want it, and optional checkpoints for those who don’t – will be a defining characteristic of responsible AI deployment in 2026. 

This approach lets teams build gradual trust in their AI and automation systems, understand how they behave, learn their quirks (and there are many), and still develop confidence in the underlying logic — all before embracing greater levels of automation. It’s like the dual controls on a driving instructor’s car during those early lessons. 
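As a rough illustration of offering both modes, here is a minimal sketch in Python. Everything in it is hypothetical – the step names, the require_review flag and the zero_touch switch are assumptions made for the example, not a description of any particular product – but it shows how an optional human checkpoint can sit inside an otherwise automated workflow.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Step:
    """One stage in an automated workflow."""
    name: str
    run: Callable[[Any], Any]      # the automated action for this stage
    require_review: bool = False   # optional human checkpoint after the stage


def run_workflow(steps: list[Step], payload: Any, zero_touch: bool = False) -> Any:
    """Run each step, pausing for human sign-off wherever a checkpoint is enabled.

    With zero_touch=True the same workflow runs end to end with no pauses,
    so teams can graduate to full automation once they trust the logic.
    """
    for step in steps:
        payload = step.run(payload)
        if step.require_review and not zero_touch:
            answer = input(f"[{step.name}] produced {payload!r}. Approve? (y/n) ")
            if answer.strip().lower() != "y":
                raise RuntimeError(f"Step '{step.name}' rejected by reviewer")
    return payload


# Hypothetical usage: an invoice workflow with one human checkpoint on classification.
steps = [
    Step("extract_total", lambda inv: {**inv, "total": 1250.00}),
    Step("classify", lambda inv: {**inv, "category": "utilities"}, require_review=True),
    Step("post_to_ledger", lambda inv: {**inv, "posted": True}),
]

if __name__ == "__main__":
    print(run_workflow(steps, {"id": "INV-001"}))
```

The point of the design is that the same pipeline serves both modes, so a team can start with the checkpoint switched on and move to zero-touch only once that dual-control phase has built enough trust.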

All that said, there’s still the question of whether graduates and those new to their profession need to learn those manual, repetitive tasks at all. Do they need to develop a feel for the processes they’re automating and build the confidence to question AI’s outputs? There’s no easy answer here. But it will be interesting to see how the first generation of AI natives engage with the technology when they join the workforce.

The rise of prompt literacy 

Every major technological transition requires a new skillset. That can be frightening or intimidating for some people. It was true during the Industrial Revolution, and it was true twenty years ago, as internet search became part of everyday life. Almost overnight, the need to conduct library-based research vanished. For some, this was a revolution. For others, it was a technical minefield for which they were not equipped. They held onto their microfiche and reference books before finally – and reluctantly – giving in and entering the new age.

The majority of us quickly learned how to search the internet and, as search engines evolved, we subtly refined our queries in line with that evolution. We also learned which sources to trust (most of the time). 

AI requires the same kind of interaction. You need the ability to query the data, refine your request, and think critically about the responses it provides. In 2026, that means developing prompt literacy. Large language models are exceptionally good at giving plausible answers. But as with early search engines, the quality of the output depends almost entirely on the quality of the request. Put rubbish in, get rubbish out – as the old saying goes.  

Teams that learn how to question AI systems clearly, precisely and critically will gain a measurable advantage – whether they’re analysing data, drafting reports or automating routine tasks. This is about cognitive rather than technical fluency: the ability to frame a problem clearly, provide sufficient context, and interrogate the reliability of an output. It’s also knowing when to pause, question, and ask for evidence or alternative interpretations. Prompt literacy means acting as the interface between the AI and the information you’re curating. In practice, it also makes you the point of reference when colleagues want to understand or challenge the data. 
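To make that concrete, here is a small, purely illustrative before-and-after example (the retailer, figures and column names are invented): the second prompt frames the problem, supplies context, and explicitly asks for evidence, confidence and alternative interpretations.

```python
# A vague prompt: no framing, no context, no request for evidence.
vague_prompt = "Why did sales drop?"

# A more literate prompt: a framed problem, relevant context, and an explicit
# ask for supporting evidence, confidence and alternative interpretations.
# (The scenario and column names are hypothetical.)
literate_prompt = """
You are analysing monthly sales data for a UK retailer.
Context: Q3 revenue fell 8% versus Q2. The attached CSV has columns
date, region, channel, units and revenue.

Task: list the three most likely drivers of the Q3 decline. For each one,
cite the rows or aggregates that support it, say how confident you are,
and give one alternative explanation the data cannot rule out.
"""
```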

Prompt literacy also helps teams recognise the limits of AI. Models can surface valuable insights, but they can just as easily hallucinate, miss crucial context, or misinterpret what they’ve been asked to do. Teams that accept outputs without verification will be building decisions on foundations they can’t fully see. But, as with all forms of digital literacy, prompt literacy is learnable. So, organisations that invest in prompt-literacy training alongside wider AI education over the next 12 months will see benefits as their staff gain confidence and competence.

Transparency and trust 

But what does this mean for software developers in 2026? Hopefully, they’ve been engaging with their user bases and looking at ways to address fears, particularly around job losses and the risk of surrendering too much control. If they have, they’ll have been designing workflows where AI takes the strain of repetitive work – capturing data, routing items, suggesting classifications – but in ways users can interrogate, override, pause or slow down whenever needed. This will create an environment where people-led automation and prompt literacy can thrive, and where users can feel secure as they step up their use of the technology. 

For that to happen, users should have a direct connection to AI within their software solutions. Ideally, this should be at the prompt level, so they can question its workings and outputs directly. We also think AI responses should include a confidence level so users can decide when to trust an output and when to interrogate it further. We see it as a built-in guardrail that helps people question AI while they’re actually using the system.

This will help users understand why the system made a particular recommendation, which data sources were used, and where human intervention remains essential. Displaying confidence levels also allows teams to set thresholds. They can decide whether they’re happy for AI to proceed without intervention when it is 60%, 75% or 95% confident in its response – and automatically route anything below that threshold for human review. 
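A minimal sketch of that thresholding idea, assuming a system that surfaces a numeric confidence score and a list of data sources alongside each recommendation (the field names and the 0.75 default are our assumptions, not a reference to any specific product), might look like this:

```python
from dataclasses import dataclass, field


@dataclass
class AIResult:
    """A model recommendation plus the confidence and sources the system reports."""
    recommendation: str
    confidence: float                                  # 0.0 to 1.0, as surfaced by the system
    sources: list[str] = field(default_factory=list)   # data sources shown to the user


def route(result: AIResult, threshold: float = 0.75) -> str:
    """Let confident results proceed automatically; queue everything else for people.

    The threshold is a team-level policy choice (for example 0.60, 0.75 or 0.95),
    not a property of the model.
    """
    return "auto_proceed" if result.confidence >= threshold else "human_review"


# Hypothetical example: a classification the system is only 62% confident about
# is routed to a person rather than actioned automatically.
result = AIResult("classify as 'urgent'", confidence=0.62, sources=["ticket #4821"])
print(route(result))  # -> human_review
```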

AI and search convergence  

2026 will also be defined by the growing convergence between AI and search. Where prompt literacy equips us to ask better questions, search convergence requires us to scrutinise and re-evaluate the results we’re presented with.

We’re already seeing the limitations of this in everyday use. Ask a traditional search engine for a specific recipe, and it will retrieve the exact page you need. Ask an AI model, and you may receive a confident, plausible, but entirely invented set of ingredients and instructions. AI isn’t retrieving; it’s predicting – and prediction is not the same as truth. 

This kind of error is relatively harmless in low-stakes situations (although one of us very nearly had a cake-related disaster over the weekend thanks to ChatGPT!). The same behaviour in a commercial environment can be far more damaging. What if an AI system generated a close approximation of a regulatory threshold, incorrectly summarised a policy, or invented a financial definition based on patterns rather than facts?

Large language models are now woven into mainstream search engines, offering summarised answers instead of traditional lists of sources. The experience is frictionless, but it also blurs the line between information retrieval and information generation. Teams may assume they’re reading a verified fact when, in reality, they’re reading an intelligent guess. For that reason, organisations need to help employees question, validate and cross-reference AI-driven search engine outputs. Verification has to be a core competency and can’t be treated as an afterthought. This training will be vital over the next 12 months. 
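As one lightweight, illustrative way of building that habit into tooling (the function and the citation-required rule are assumptions, not a feature of any existing search product), an AI-summarised answer could be treated as unverified until its cited sources at least exist and resolve, before a person reads them and cross-references the claim:

```python
import requests


def needs_manual_check(cited_urls: list[str]) -> bool:
    """Flag an AI-summarised answer for human review unless every citation resolves.

    A reachable URL is only a weak proxy for accuracy: it catches fabricated or
    dead sources, nothing more. The real verification still happens when a person
    reads the cited pages and cross-references the claim.
    """
    if not cited_urls:
        return True  # no sources offered at all: always send to a human
    for url in cited_urls:
        try:
            response = requests.head(url, allow_redirects=True, timeout=5)
            if response.status_code >= 400:
                return True
        except requests.RequestException:
            return True
    return False
```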

Thoughtful adopters vs unchecked users 

All these trends – prompt literacy, people-led automation, transparency, and convergence with search – point to a single conclusion: the major divide in 2026 won’t be between organisations that use AI and those that don’t. It will be between the ones who choose thoughtful adoption and those who let AI operate unchecked. Thoughtful adopters will ensure human oversight remains central and maintain visibility into how automated decisions are made. They’ll also develop AI-literate workforces capable of questioning outputs, and design systems that remain resilient as technology evolves. 

Unchecked adopters may gain early speed, but they’ll also accumulate risk. Errors compound more quickly in automated environments, especially when no one can explain how the system reached its conclusions. And as workflows become increasingly AI-driven, the cost of reversing poor decisions – or simply understanding them – will rise. 

Conclusions 

As the year unfolds, the winners in the AI stakes will be those who combine automation with clarity, human oversight with efficiency, and speed with sound judgement. Not those who use it most aggressively or across the broadest range of tasks. 

AI is already powerful; no one disputes that. But in 2026, the challenge and opportunity will be to ensure it is trustworthy, transparent, and thoughtfully applied. Ethics, common sense, education, and strong guardrails will be key here. Forward-thinking organisations will use AI as a collaborator rather than a shortcut. They’ll also recognise that human intelligence is key to fully benefiting from artificial intelligence. 
