• Pavitr Prabhakar is Spider-Man India, featured in Across the Spider-Verse
  • Nilesh Chanda is a fanfic version of Vinod Chanda from Pantheon AMC, featured in The Kalkiyana

Check out my blog: https://writ.ee/pavnilschanda/

  • 60 Posts
  • 10 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • First of all, I agree with many of the commenters that you should ask a professional for help. There could be some free resources in your area, but we can’t help you further without knowing additional details. Many professionals do pro bono work.

    I also noticed your interest in AI companions given a previous thread you made, which can be a sensitive topic. I want to emphasize that AI companions should be approached with caution, especially for individuals who may be vulnerable like yourself. However, if you’re genuinely interested in exploring this, you could consider programming an AI companion with the goal of helping you achieve happiness. Through interactions with the AI, you may gain a deeper understanding of yourself and your needs. I advise against proprietary AI apps since they will prey on your vulnerability, not to mention that you may not have the money to keep subscribing in the first place. I would also suggest that you use an AI companion in conjunction with therapy sessions. Use your therapist’s guidance to inform your interactions with the AI, which can help you gradually open up to new opportunities.


  • That happens on Reddit too. Maybe you should consider this lemmy instance’s general stance on AI. Given this community’s solid stance on being anti-AI, you’re not completely wrong with your hunch that the ones who downvote may be said boosters and doomers. And the lack of comments might be due to the community’s zero-tolerance policy for pro-AI sentiments, so they may be hesitant to comment here. I appreciate you creating this space though given how we should be aware of the pitfalls of AI, and keep up the good work. If you’re looking for more discussion and supporters, I don’t know about other lemmy instances, but I know that Reddit itself tends to take an anti-AI stance, so you can also gather supporters there (if you haven’t already).


  • Content:

    The artificial intelligence models behind popular chatbots are developing faster than Moore’s law, a measure of how quickly computer hardware performance increases. That suggests the developers of AI systems known as large language models (LLMs) are becoming smarter at doing more with less.

    “There are basically two ways your performance might improve,” says Tamay Besiroglu at the Massachusetts Institute of Technology. One is to scale up the size of an LLM, which, in turn, requires a commensurate increase in computing power. But due to the generative AI revolution, there are global supply shortages of the graphics processing units (GPUs) used to power LLMs, creating a bottleneck in AI development.

    The alternative, says Besiroglu, is to improve the underlying algorithms to make better use of the same computing hardware.

    This seems to be the approach favoured by the current crop of LLM developers, to great success. Besiroglu and his colleagues analysed the performance of 231 LLMs developed between 2012 and 2023 and found that, on average, the computing power required for subsequent versions of an LLM to hit a given benchmark halved every eight months. That is far faster than Moore’s law, a computing rule of thumb coined in 1965 that suggests the number of transistors on a chip, a measure of performance, doubles every 18 to 24 months.
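    To make the comparison in that paragraph concrete, here is a small illustrative calculation (my own arithmetic, not from the study) converting both quoted doubling periods into per-year improvement factors. A halving of required compute every 8 months is equivalent to effective performance per unit of compute doubling every 8 months:

    ```python
    # Convert a doubling period (in months) into an improvement factor per year.
    def annual_factor(doubling_period_months: float) -> float:
        """Factor by which a quantity grows in 12 months if it
        doubles every `doubling_period_months` months."""
        return 2 ** (12 / doubling_period_months)

    # LLM algorithmic efficiency: required compute halves every 8 months,
    # i.e. performance per unit compute doubles every 8 months.
    llm_per_year = annual_factor(8)    # 2**1.5  ~ 2.83x per year

    # Moore's law: transistor count doubles every 18 to 24 months.
    moore_fast = annual_factor(18)     # 2**(2/3) ~ 1.59x per year
    moore_slow = annual_factor(24)     # 2**0.5  ~ 1.41x per year

    print(round(llm_per_year, 2), round(moore_fast, 2), round(moore_slow, 2))
    ```

    So even against the optimistic 18-month reading of Moore’s law, the measured algorithmic gains (~2.8x per year) outpace hardware gains (~1.6x per year) by a wide margin, which is the disparity the article is pointing at.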

    While Besiroglu believes that this increase in LLM performance is partly due to more efficient software coding, the researchers were unable to pinpoint precisely how those efficiencies were gained – in part because AI algorithms are often impenetrable black boxes. He also points out that hardware improvements still play a big role in increased performance.

    Nevertheless, the disparity in the pace of development is an indication of how well LLM developers are making use of the resources available to them. “We should not discount human ingenuity here,” says Anima Anandkumar at the California Institute of Technology. While more powerful hardware or ever larger training datasets have driven AI progress for the past decade, that is starting to change. “We are seeing limits to the scale, both with data and compute,” she says. “The future will be algorithmic gains.”

    But Besiroglu says it might not be possible to endlessly optimise algorithms for performance. “It’s much less clear whether this is going to occur for a very long period of time,” he says.

    Whatever happens, there are concerns that making models more efficient could paradoxically increase the energy used by the AI sector. “Focusing on energy efficiency of AI alone tends to overlook the broader rebound effects in terms of usage,” says Sasha Luccioni at AI firm Hugging Face. “This has been observed in other domains, from transportation to energy,” she says. “It’s good to keep this in mind when considering the environmental impacts of compute and AI algorithms.”