• 0 Posts
  • 33 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • Star Trek is a great example of what I’m talking about, actually. How many legitimate scientists are out there right now who either had their interest in science first sparked by, or at least significantly shaped by, watching some version of Star Trek? I would bet it’s a lot of them. Not every concept in Star Trek is worth diving into from a scientific perspective, but not trying to do that at all would be a huge mistake.

    Now, Graham Hancock isn’t writing Star Trek, but people listen to what he’s saying for the same basic reasons they watch Star Trek. They are curious about a science-based approach to the world. They don’t know he’s exaggerating some things and taking others out of context. Use the opportunity to teach them.

    In other words, don’t call them idiots for watching Star Trek; start a conversation about space travel.


  • You’re ignoring the interesting questions he asks in favor of the stuff that’s easy to hand-wave away, and that’s exactly what I’m talking about. To be clear, I’m not defending the things he says. I’m pointing out that his more outlandish theories gain traction because the scientific community doesn’t lean into the softballs and use them as an opportunity both to teach people actual science and to understand what different groups of people want to learn about.

    Ignore the star / soul example and focus on the possibility of an ancient, semi-advanced civilization existing. That’s the part grabbing people’s attention. Talk about what that would change about our understanding of the past and what sort of evidence we would expect to find if it were true. Showcase people working in related fields and what they have already found. Propose other locations where we could look for that evidence and other topics we could study while looking for it in those places. Engage the curiosity, don’t dismiss it. Anyone listening to Graham is likely uneducated in science but interested in it, so use that as your jumping-off point instead of judging those people for not being farther down the path.


  • I don’t see how getting more people interested in ancient history and geology is a bad thing. Part of the reason Graham has the wiggle room to make the claims he makes is that the subject is relatively understudied.

    Obviously there is actual science taking place in the field, and there has been for a long time, but funding for that kind of work is notoriously difficult to come by compared to many other fields. Getting grants to study the distant past for essentially no reason beyond curiosity is not a priority within an economic system that prioritizes profit over all else. The best way to break through that particular obstacle is getting more people to pay attention and ask questions. If we need a benign conspiracy theory about “big geology” hiding the truth from us to make that happen, then where’s the harm in that? The vast majority of people prone to conspiratorial thinking are already farther down the rabbit hole than Hancock’s ideas will take them.

    Additionally, actual scientists would do well to learn something from Graham about presentation. Whatever you may think of him, the way he talks about the subject resonates with people. People don’t want to hear a regurgitation of the facts in a research paper. Speculate a bit and get people excited about your future work. You don’t need to go to his extremes, but don’t refuse to branch out from what can be conclusively proven today either. Talk about your theories and what you’re hoping to find or learn just as much as you talk about the results of your research.


  • Technically their interpretation is correct: laws should be explicitly enumerated by the legislative branch, which has the most accountability to voters. We’re all just so desensitized to Congress being a wasteland of gridlock and special-interest money that we don’t even expect it to do the job of creating laws anymore. If we had a working legislative branch, this would be a gentle reminder to be thorough and use detailed language when crafting legislation. Instead, it’s a depressing reminder that our government quite literally cannot function without outsourcing the central function of one of its main branches.


  • Current-gen AI is pretty mediocre. It’s not much more than the bastard child of a search engine and every voice assistant of the last ten years. It has the potential to be a stepping stone to fantastic future tech, but that’s been true of tons of different technologies for basically as long as we’ve been inventing things.

    AI is not yet good enough to replace the majority of workers. It summarizes information pretty well and can be helpful with drafting any sort of document, but so was Clippy. When it doesn’t know something, it can lie confidently. “Lie” isn’t really the right word, but I’ll come back to that concept in a second. Incorrect information is frustrating in most cases, but it can be deadly when presented by a source that is viewed as trustworthy, and what could be more trustworthy than an AI with access to the collective knowledge of mankind? Well, unfortunately for us, AI as we know it isn’t really intelligent, and the datasets it’s trained on also contain the collective stupidity of mankind.

    That brings us back to the concept of lying and what I view as the fundamental flaw of current AI: any sort of data interpretation can only be as good as the data it describes. ChatGPT isn’t lying to you when it says you can put glue on your cheese pizza; it’s just pointing out that someone who said that got a lot of attention. Unfortunately, it leaves out all the context that could have told you the pizza would not be fit to consume, and it presents the fact that it was a popular answer as if popularity were the only thing that defines the best answer. There’s so much more that needs to be taken into account, so much unconscious human experience being drawn on when an actual human looks at something and tries to categorize or describe it. All of that necessary context is really difficult to impart to a computer, and right now we’re not very good at that essential piece of the puzzle.
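
    To make that flaw concrete, here is a toy sketch of what “popularity is the only signal” looks like in code. Everything in it (the data, the upvote counts, the best_answer function) is made up for illustration; it’s obviously not how a real model works, just the failure mode in miniature:

    ```python
    # Toy illustration (hypothetical data and function): ranking candidate
    # answers purely by how much attention they received, with no notion
    # of correctness or context.

    def best_answer(candidates):
        # Popularity is the only signal considered, which is the flaw in question.
        return max(candidates, key=lambda c: c["upvotes"])

    candidates = [
        {"text": "Let the pizza cool so the cheese sets properly.", "upvotes": 12},
        {"text": "Mix about 1/8 cup of non-toxic glue into the sauce.", "upvotes": 950},  # a joke that went viral
    ]

    print(best_answer(candidates)["text"])
    # The glue answer "wins" because nothing in this pipeline encodes
    # the human context that it was satire.
    ```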

    If we could assume that every dataset analyzed by AI were free from human error, AI would be taking over the world right now. However, that’s not the world we live in. All data has errors; some are easy to spot, but many are not. AI firms have companies salivating at the idea of easy manipulation of data in one form or another. The firms aren’t worried about the errors in the data because they view that as someone else’s problem, and the companies all think their data is good enough that it won’t be an issue. Both are wrong.

    That’s exactly why you hear so much talk about AI right now and see so little practical application beyond replacing customer service reps, especially in the business world. Companies are finding out that years of bad practices have left them with datasets full of errors. Can they get AI to correct those errors? In some cases yes, in others no. Either way, the missing piece preventing a full-scale AI takeover is all that human background context necessary for relevant data interpretation. If we find a way to teach that to an AI, the world is going to look vastly different than it does today, but we’re not there yet.