The Truth About Building AI Startups Today


athompson

Joined: Mar 2024

In the first episode of the Lightcone Podcast, YC Group Partners dig into everything they have learned working with the top founders building AI startups today. They share the ideas that are working particularly well, mistakes to avoid, and take a look at the competitive landscape among the current AI giants.




52 Comments

  1. The government contract search tool is worthless because that's not how the proposal search process works. Isn't it a little foolhardy to invest in a tool that purports to solve problems in a domain where none of you are domain experts?

  2. Do you guys think blockchain will make a comeback for authentication as AI keeps growing, especially toward AGI, as the deepfake problems rise? I mean, with the boom of AI the next logical step seems like web3, doesn't it? Please chime in, everyone: what do you think of this?

  3. We need fewer products that are GPT wrappers; it's good to see open-source AI models released with their training datasets instead, though that can carry some danger. Anyway, looking forward to seeing new AI-driven companies backed by YC ⭐

  4. A light cone in special relativity does not refer to the spatial cone shape produced by a flashlight. Maybe I'm confused about what he meant. It actually refers to the cone shape that all possible trajectories of light produce on a local spacetime diagram. For a given starting point it defines a boundary that no object can ever cross.
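The commenter's definition can be made precise. For an event placed at the origin, the light cone is the set of spacetime points at zero spacetime interval from it:

```latex
% Light cone of an event at the origin: all points reachable by a light ray.
c^2 t^2 - x^2 - y^2 - z^2 = 0
% Interior (timelike separation): reachable by slower-than-light objects.
c^2 t^2 - x^2 - y^2 - z^2 > 0
% Exterior (spacelike separation): crossing in would require v > c.
c^2 t^2 - x^2 - y^2 - z^2 < 0
```

On a spacetime diagram with one spatial axis, the boundary reduces to the pair of lines $ct = \pm x$, the cone shape the comment describes.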

  5. I suspect that the term 'smart founder' might scare off the young, talented but shy potential entrepreneurs.
    It could be a term invented by those who have already 'made it' and who have more than a touch of hubris.

  6. The GPT meme is real, and it has nothing to do with "just being a wrapper"; it's more about FOMO building.

    For example, YC invested in a lot of these text2sql wrappers. But these solutions aren't solving the actual problem: the latency of getting information from technical to non-technical people.

    Enterprise companies would be all over this, but they're not, because all text2sql does is return incorrect data, hallucinate, and exacerbate bad data governance.

    Ad hoc data insights powered by GPT in chat format are BS.
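One mitigation for the incorrect-data problem this comment raises is to dry-run generated SQL against the real schema before executing it. A minimal sketch using SQLite; the table and queries are illustrative, and in a real text2sql product the `sql` string would come from a model:

```python
import sqlite3

def validate_sql(conn, sql):
    """Dry-run generated SQL with EXPLAIN so schema errors and
    hallucinated column names fail before any data is returned."""
    try:
        conn.execute("EXPLAIN " + sql)
        return True
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")

good = "SELECT SUM(total) FROM orders"
bad = "SELECT SUM(revenue) FROM orders"  # hallucinated column

print(validate_sql(conn, good))  # True
print(validate_sql(conn, bad))   # False
```

This catches structurally invalid output, though not a query that is valid SQL yet answers the wrong question, which is the harder failure mode the commenter points at.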

  7. the only possible "multi-generational" company is OpenAI, except they seem to be operating at a significant loss.. everything else is dependent on 10-50x the usual cloud spend or at the mercy of OpenAI to operate, and again, OpenAI has not cracked the nut on cost efficiency. it will likely require a rebuild not using transformers, or some non-transformer supplement

    automation beyond this follows from "big data" 2008-2012 and "data visualization" 2013-2017; then we had chatbots v1 2017-2019 alongside blockchain, COVID produced remote work and memestocks, and now what, 2023-2024 is LLMs and war.. there, now you're up to date, you've missed nothing

    i forgot to mention that automation was always known to follow big data and data visualization.. so the LLM opportunity is nothing more than the underlying value of the automation opportunity which is the culmination of big data and data viz.. now it's just an automation on the same old data.. the stuff they're mentioning about a chatbot assistant is the same that was discussed — um — within big tech, and then with chatbots v1

    everything else is sitting at 60x EBITDA, and it's commonly understood that you shouldn't build things tied to other people's platforms. the game YC is playing is to absorb the young minds to understand the market best. it's not a game to empower or enrich a founder (lol).. sometimes it works that way. this, in the broad stroke, probably is not that.

  8. It is very interesting how exponentially AI is changing the way we find information and work around the world. Investing in large companies dedicated to AI is a smart move. Many companies are migrating, and will continue to migrate, to this new technology; it saves money if a robot can do a job instead of a person.

  9. Abandonment is highly likely for any company that purposely blocks access to a human, replacing the human with a bot.

    If you A/B test a customer service phone call over, say, 100 trials, the data is unmistakable. It's very easy for a human to not have a successful outcome from their call.

    There's a very good reason for it. The adult human customer is an expert at interacting with another human. And the human customer service rep has so many more 'degrees of freedom' to problem-solve than a machine, the likelihood of the human customer service rep solving the (very possibly unique!) problem of the customer IS QUITE HIGH.

    As Garry said, perhaps in 5 to 10 years, AI powered reps will be as helpful as a human rep.

    But AI systems have a 'rational' channel. Only.

    Humans have three channels:
    1) rational
    2) emotional (frustration, forgetfulness triggered by frustration, etc.)
    3) some combination of rational and emotional (and an UNLIMITED number of mixtures of rational and emotional reactions)

    The problem is, forcing an emotional + rational person to use a rational-only AI tool that has limited ways to help will, if there is an alternative business available, lead the customer to abandon the 'forcing' business.

    HOW WILL WE KNOW WHEN AI CAN PREVENT ABANDONMENT OF A BUSINESS?

    The AI system will need to be field-tested, with a human operator at-the-ready to take over if a customer is not being led to a successful outcome.

    As the number of times the human has to intervene (to avoid losing the customer!) and take over for the AI system declines, we're on the right track.

    We're nowhere near that right now.
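The field-test criterion above can be made measurable: track, per period, the fraction of calls where the human operator had to take over, and watch for a downward trend. A toy sketch, with the escalation log entirely invented for illustration:

```python
def intervention_rate(calls):
    """Fraction of calls where the human had to take over from the AI rep."""
    return sum(calls) / len(calls)

# Hypothetical escalation log: True = human operator had to intervene.
weekly_calls = {
    "week_1": [True, True, False, True, False],    # 60%
    "week_2": [True, False, False, False, True],   # 40%
    "week_3": [False, False, True, False, False],  # 20%
}

rates = [intervention_rate(calls) for calls in weekly_calls.values()]
improving = all(a > b for a, b in zip(rates, rates[1:]))
print(rates, "improving:", improving)
```

A strictly declining rate is the "right track" signal the comment describes; by its own argument, the bar for ending the field test is a rate near zero, not merely a falling one.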

  10. I don't agree with the database wrapper analogy because foundational models are too much of a block box to build defensible (10 years time) startups. Due to the FOMO and time pressure so many startups are building products that highly depend on the openAI model behaviour, API, pricing that changes every day. In my opinion, LLMs and transformers are way too unstable to justify the sheer amount of wrapper and infra startups. For example prompt injection, there is currently no way to establish 100% security in LLMs due to the fact that there is no separation between instruction and user input (akin to how SQL injection was fixed) hence I am highly skeptical that PromptArmor will exist in a few years time. These issues are just waiting for the major architecture re-vamp from openAI or the next new foundational model architecture. Most defensible business ideas related to language have popped up a while ago, since the rise of NLP in our daily lives (Siri etc.)

  11. This episode is so great that I've listened to it 3 times and wrote a summary 🙂 maybe it will be useful for you guys, have fun!

    – no one has 4 years of LLM experience (so do it – everyone starts from the ground floor);
    – prompt engineering tools are useful, build them!
    – workflow automation: wherever a human is doing repetitive tasks, replace them with LLMs;
    – changing format X to Y with LLMs;
    – copilots: don't build copilots for XYZ (they are new to users) but rather an old system with an LLM solving problems (big corps may even buy it but may not stick with it);
    – an AI strategy is like a mobile strategy 10 years ago: it may be low-hanging fruit but may also be a tarpit, so just solve problem X;
    – people are selling tools but users are not digging gold yet;
    – a cascade of clients: a small company sells to a medium company, medium to big, and big to Fortune 500, and then the small company fails because the Fortune 500 went back to Salesforce because they added you as a module;
    – models are becoming cheaper and cheaper;
    – a model tuned for a particular industry (made with domain experts, like MDs), or built-in PP use cases, or security for LLMs in a particular area (it's like with cloud 5 years ago – it's a clock reset);
    – controlling data too;
    – smaller models like Llama for Air France, or e.g. SQL queries, are better trained on simpler or older models like GPT-3.5 (the same for hardware use-case AI);
    – the ChatGPT Store is a prototyping tool, like FPGAs for circuits;
    – GPT wrappers… well, SaaS isn't a DB wrapper either… the 1.0 version of SaaS isn't just a CRUD app (you can have PMF with just better UX, and it's not always true that it will be eaten by the ChatGPT Store);
    – a great foundation for a big LLM-based company: business logic, especially when there is a lot of custom business logic 💖💖💖 (imho the most important point of the whole discussion) / a bad foundation for a big LLM-based company: a general use case, abstract users;
    – open-source AI and competition may be the way to avoid the AGI "Matrix scenario", haha 🙂
    – researchers and scientists will be back soon and they will start new startups (Intel/Nvidia/Wozniak/Homebrew co-founding style);
    – the Internet, AI, and other things important today were just toys until they weren't (artists/scientists build something and sociopaths sell it), so don't worry that no one takes you seriously today;

  12. The AI craze is the same craze as blockchain was; it will pass. Most companies will go bankrupt or be bought, business as usual. Hopefully there will be a few worthwhile ones that provide real value to mankind, not just money-grabbing businesses. We shall see. It is fun though, we can't deny that.
    To go further, AI tools/apps are being built just for the sake of building. Frankly, most of those are not even needed. I believe we as a society have not yet understood what the role of AI is. If you don't know what you want it for, don't build it, don't buy it. It is a waste of time and money. Maybe we should also stop using the word "AI" left and right.

  13. First, great show. I appreciate getting into the nuts-n-bolts, but I also appreciate higher-level talk shows like this. Thank you. Secondly, what is the name of the government contracting AI company/tool that you mentioned? I almost fell off my seat when I heard someone was doing this. I started this very same thing; however, it's a 50/50 split of code and AI, as I've found at least 50% of a Sources Sought response or Market Survey response is a regurgitation of what can be found through code. In any case, if I can pay for an already operational SaaS for this, I would gladly do that in place of finishing what I've been working on.

  14. I’m a GPT-4 customer. I’m frustrated with the overconfidence in responses that end up being wrong. I’m also frustrated with its laziness and not knowing where the laziness starts and ends. I asked it to give me a list of all the countries in the world and it was adamant that I just go to a Wikipedia page.

  15. Navigating the current landscape of AI startups, as outlined in "The Truth About Building AI Startups Today," reveals a stark juxtaposition in our societal fabric. It's like watching a child build a towering structure from blocks without any thought for the base; it's bound to topple, affecting everything in its vicinity. Similarly, AI startups can engineer revolutionary software that, while impressive, lacks a foundational check and balance system, potentially displacing thousands of workers without a whisper of regulation. This scenario is akin to attempting to add an extension to a family home without seeking permission – it's unimaginable, yet, paradoxically, it's the reality for burgeoning AI technologies.

    The regulatory vacuum surrounding AI development and deployment is akin to letting children run with scissors. It's not just careless; it's a recipe for unintended consequences. While one must navigate a labyrinth of bureaucracy to make minor modifications to their living space, AI startups are crafting tools with the power to redefine the job market, introducing "emergent capabilities" that evolve beyond their initial programming, without so much as a nod to the societal upheaval left in their wake.

    This discrepancy in oversight is not just perplexing; it's bonkers. Totally bonkers. How can such monumental changes be unleashed upon society without any form of consultation or consideration for the workers rendered obsolete? It's high time employment was given the reverence it deserves, woven into the social contract with safeguards against "No Fault" layoffs. Like parents setting boundaries for the safety and well-being of their children, companies, especially those in the AI sector, need to be integrated into the social fabric with clear, enforceable guidelines to prevent harm before it occurs.

    In essence, as we venture further into the age of AI, the imperative for regulation becomes increasingly clear. Just as a family unit thrives on structure and rules, so too does our broader societal family need a framework that prioritizes the collective well-being over unchecked innovation.

  16. Garry's last comment about nerds vs sociopaths hits so hard right now. In the sphere of content I'm exposed to, it seems these GPT wrappers will be the next get-rich-quick scheme that TikTokers hype up.

  17. A great video underscoring the importance of a solid idea for a startup, particularly one that seamlessly integrates with AI. Relying solely on AI as a trend without a concrete concept might not lead to a success story. At JetSoftPro, a software development service, we champion both ideas and innovative technologies, emphasizing the significance of an organic and meaningful connection between the two.

  18. Would be cool if someone could merge real-time computer vision and a large language model to develop a real-time, conversation-based AI co-worker equipped with vision.

    Instead of solely focusing on replacing tasks, which might not happen as quickly in some areas, perhaps someone could concentrate on enhancing productivity and aiding individuals to become more proficient in utilizing AI effectively.

  19. I'm pretty certain I know why Y Combinator is the largest incubator.

    That said, it's clear to me after watching this video why startup culture is ruined.

    These people couldn’t care less about actually shaping the future of technology. The very idea that a startup should be nimble enough to completely abandon their original business, and move to a completely different idea, is absolutely bananas.

    How about founders actually get some expertise in the industry they intend to disrupt?

    There are millions of problems that need to be solved, across thousands of businesses or business types, none of which are going to be solved by a group of kids with shiny-ball syndrome.

    Real investor value is created by solving problems in a business that people in that business need a solution for.

  20. I think foundational models are going to eat everything. And companies don't need an AI strategy, but rather need to hire savvy employees who know how to use AI tools to enhance their productivity. The integration of AI into their business processes will come from the bottom up.