The collapse of HUMAN knowledge is now nearly complete
[Chart: text quality plotted over time]
So I've conducted an assessment of the quality of human knowledge based on the year in which it was published.
And this is because, of course, I'm assessing a large portion of the entirety of human knowledge, in many different languages, as part of the data pipeline analysis for training our AI model, Enoch, the free AI model you can use at Brighteon.ai.
So here's what I've come to discover, and it's probably a little disturbing.
It looks like the peak of human cognition in Western civilization, that is, the peak of human reading and writing skills, the ability to elucidate your thoughts in written form, happened in about the late 1970s to mid-1980s, maybe stretching into the early 1990s.
So let's call it a 20-year period, from 1975 to 1995.
That was peak intelligence for Western civilization.
And after that point, human intelligence, in the West anyway, began to substantially decline.
And we see that reflected in the decline in quality of writing, the move away from sophisticated vocabulary into simplified vocabulary.
We see a shift from high school reading level down to roughly seventh grade level, and today newspapers and magazines are written at what we would call a fifth grade level.
Although there are exceptions, obviously; there are higher-end magazines that cater to a more intelligent audience, Scientific American, for example.
But even Scientific American has been cheapened cognitively, and today it reads a lot closer to Popular Science than to the Scientific American of a couple of decades ago.
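Grade-level claims like these are usually quantified with a readability formula. Here's a minimal Python sketch of the Flesch-Kincaid grade level, using a rough vowel-group heuristic for counting syllables (real readability tools use dictionary-based syllable counts):

```python
import re

def syllable_count(word: str) -> int:
    """Rough syllable estimate: count vowel groups (heuristic, not exact)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_count(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

simple = "The cat sat on the mat. The dog ran fast."
dense = ("Epistemological sophistication necessitates deliberate cultivation of "
         "vocabulary, syntactic complexity, and sustained analytical reasoning.")
print(fk_grade(simple) < fk_grade(dense))  # True: the denser passage scores higher
```

Run on a simple passage versus a dense one, the formula assigns the denser passage a far higher grade, which is the kind of shift in written material being described here.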
So there's been a shift across the board.
And newspapers are a key indicator here; I'm really fascinated by comparing newspapers from the 1950s, 60s, and 70s with newspapers today.
There's no comparison.
It's clear that the average American newspaper consumer in the 1960s or even the 1950s was far more intelligent than the average American newspaper consumer today in 2025.
No comparison.
I mean, in 50 years, from 1975 to 2025, the cognitive capabilities, or reading comprehension capabilities, of Americans have substantially plummeted.
Now, I am tying this to the rise of certain types of technology.
We really see the end of the peak of Western cognition in about the mid-1990s, like I said, from 1975 to 1995.
And in the mid-1990s, what were we witnessing in society?
We saw the rise of the internet.
Now, the internet can be a wonderful tool for research and for knowledge, for learning, reading and writing, etc.
But it also tended to present information in a shorter form and a more simplified form.
And so as the 1990s progressed, reading comprehension actually declined in Western civilization.
So the rise of the internet meant more information available, but lower quality information overall, which led to a decline in reading comprehension and reading level capabilities.
And then if we fast forward to the early 2000s, we see even more cognitive decline because then we have the rise of social media.
Social media really began to take off in, let's say, 2005 or thereabouts.
And social media was centered around even shorter forms of content, and that's where we began to see the rise of emojis or emoticons.
So symbolic communication, little pictographs instead of words.
But the words themselves were also very, very short.
For example, Twitter, in its first rendition, only allowed you to post 140 characters.
That was it.
It was one sentence and kind of a short sentence at that.
And as Facebook and MySpace were rising up, human cognition was falling down.
And the more social media spread, the dumber people got, which isn't surprising if you visit the mainstream platforms these days, even X. It's not the place where the brightest people express themselves, it seems, although there are exceptions.
So, fast forward then to about 2015.
In 2015, or even 2014, we saw extreme censorship of human knowledge by the tech giants.
So that's when Google began to really censor anybody who was questioning big pharma.
And in 2018, Google rolled out a big update called the Medic update, and it obliterated all information about holistic medicine, alternative medicine, etc.
It pushed everybody into pharmaceutical medicine and conventional Western medicine.
So it just moved people away from anything that was natural, alternative, or complementary.
And that was all by design.
So as a result, the tech platforms, including YouTube, Google, and Facebook, shifted their focus: instead of spreading information, their new mission was to isolate people from knowledge.
So they became literally anti-knowledge platforms.
And that's what Google is today in particular.
It's an anti-knowledge platform that is designed to isolate you from knowledge that could empower you.
Knowledge about nutrition, health, longevity, self-reliance, sustainability, financial independence, and all kinds of similar topics.
So that was a major shift.
And of course, because of that, then people were dumbed down even further.
And you saw this reflected in the newspapers and magazines, and I can show you many examples; look at something like a USA Today from a few years ago, and you can see it was really getting dumbed down.
It was looking more and more like a comic book.
It was heavily saturated with photos and graphics, but not much text.
Nothing like a newspaper from 1957, for example; an average newspaper from 1957 reads today like college-graduate-level material, if not master's-degree-level reading.
All right, now, fast forward to 2023, let's say, the rise of AI.
And of course, AI usage has continued to spread since then.
A lot of college students are using ChatGPT to write their papers, and people are using it and other engines to write their letters and emails, and even to do their thinking for them.
So now we have reasoning models that can step through the process of thinking, and this has allowed lazy humans to offload thinking skills to machines.
As a result, we now have a lot more people who are failing to learn how to write.
They are not typically sophisticated readers, and they are using machines as surrogates for their thinking, so they're not practicing their thinking skills.
So humans are becoming even dumber still, which is really saying something, because they had already achieved quite an astonishing level of dumbing-down before the rise of AI.
Now, there's another factor in all of this that has to be mentioned, and that's the rise of wokeism.
And I think the rise of wokeism really took off during the Obama years in America.
And of course, wokeism is centered around delusional ideas, for example, thinking that a man can become a woman, or that a person's gender can be changed instantly in their own mind simply by making a wish.
And wokeism was really a form of cultural retardation.
And it was rooted in these ideas that made no sense whatsoever.
For example, supposing that all white people were bad because of the lack of melanin in their skin, or that all people of color should be boosted with extra bonus points when grading them in order to achieve academic equity.
These ideas, again, the centerpieces of wokeism, were all rooted in delusion.
And as a result of this wokeism, in my assessment as someone who builds AI models, you really can't use any information from the world after about, oh, I don't know, 2010.
You especially can't use it if you're training reasoning models, because so much of that information was irrational, and it would break a reasoning model.
And this was also true with climate change.
So the whole climate change narrative is completely fabricated, claiming that carbon dioxide is a bad molecule that's bad for the planet, and somehow that plants don't need CO2 or that photosynthesis doesn't need CO2.
You know, these are crazy insane ideas.
But that's what was pushed, and it was pushed across online conversations, it was heavily enforced.
It was pushed through science journals, which all got funded by governments in order to reach bizarre conclusions about climate.
You know, half the funding went to climate projects, it seems.
And all the scientists knew that if they wanted to continue to receive funding, they had to produce whatever science the government wanted, which was "science" that would confirm that climate change is a horrible risk, even though it was all complete nonsense.
But as a result, you really can't use any information from that era.
And if you want to train language models or reasoning models to be really smart, you either have to focus on the era of peak cognition in the West, which is again 1975 to 1995, or you have to go outside the United States and use content in other languages, which is something we have done.
For example, in Chinese, we don't see the same insane rise of wokeism.
And we also don't see it in Russian.
But we do see it in German, French, and Italian, the European languages, plus Spanish; there we see wokeism, which is a contaminant in the history of human knowledge.
So when you're training models, you have to be very careful about the era of the content, and the language and country of origin of the knowledge you're choosing for training purposes.
And that's why, when it comes to nutrition and phytochemistry, we have used a lot of science papers that came out of China and were originally published in Chinese; we translated those into English and then used them to train the model on additional scientific information about botany, plant medicine, nutrition, etc.
Because China turns out to be the best source for that information.
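The selection criteria described here, keep peak-era Western content plus content from languages judged uncontaminated, can be sketched as a simple corpus filter. The field names and language codes below are hypothetical placeholders, not our actual pipeline schema:

```python
# Hedged sketch: filter a training corpus by publication year and language.
PEAK_ERA = (1975, 1995)           # the claimed window of peak Western cognition
TRUSTED_LANGUAGES = {"zh", "ru"}  # languages said to lack post-2010 contamination

def keep_document(doc: dict) -> bool:
    """Keep a doc if it falls in the peak era, or comes from a trusted language."""
    year = doc.get("year")
    in_era = year is not None and PEAK_ERA[0] <= year <= PEAK_ERA[1]
    return in_era or doc.get("lang") in TRUSTED_LANGUAGES

corpus = [
    {"id": 1, "year": 1982, "lang": "en"},   # peak-era English: keep
    {"id": 2, "year": 2018, "lang": "en"},   # post-2010 English: drop
    {"id": 3, "year": 2019, "lang": "zh"},   # modern Chinese: keep
]
selected = [d["id"] for d in corpus if keep_document(d)]
print(selected)  # [1, 3]
```

A real pipeline would add translation and quality-scoring stages after this filter, but the era/language gate is the core of the selection logic described.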
Whereas in the West, most of the grant money was going to proving climate change, which was a complete waste of time and money.
And so the conclusion here is that this explains why ChatGPT is retarded.
This explains why Grok is retarded.
It's because these mainstream models are trained on what's called the Common Crawl of the internet.
The number one source in the Common Crawl is Reddit, which is like a retard hub of disinformation.
They scrape Reddit for the Common Crawl, and they scrape Wikipedia, which is also controlled by the CIA, where every narrative is controlled to push false narratives.
So the top two sources for AI engines, Reddit and Wikipedia, are both rooted in delusions and falsehoods.
And so that's why the models suck, by the way.
And so what we have done at Brighteon.ai is take open source base models, all of which were trained on Common Crawl data, and then alter them substantially.
We've found very clever ways to alter them through a number of means, not just fine-tuning but beyond fine-tuning.
I don't know what you'd call it: mind wiping, major tuning, I don't know.
But we've found ways to drastically alter the vector space that describes the neurological connections between tokens, or concepts, inside an AI engine.
And as a result, our engines now represent reality, and they are much smarter than OpenAI's, Grok's, Microsoft's, or Google's engines, much smarter by far, on reality-based topics.
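I won't claim this shows the actual method, but as a toy illustration of what it means to alter the vectors connecting concepts inside a model, here's a sketch that blends one concept's embedding toward another and measures how their cosine similarity changes (random vectors and illustrative concept names only, not a real model):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
dim = 64  # toy embedding dimension
emb = {name: rng.normal(size=dim) for name in ("nutrition", "pharma", "healing")}

before = cosine(emb["nutrition"], emb["healing"])
# Nudge "nutrition" toward "healing" by blending the two vectors.
alpha = 0.5
emb["nutrition"] = (1 - alpha) * emb["nutrition"] + alpha * emb["healing"]
after = cosine(emb["nutrition"], emb["healing"])
print(after > before)  # True: the edited vector sits closer to the target concept
```

Editing embeddings directly like this is one crude way to move concepts closer together; fine-tuning achieves a related effect across all of a model's weights via gradient descent.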
So our engine at Brighteon.ai is not yet a reasoning engine (we will release one at some point); it's a standard LLM that provides outstanding answers on the things that matter in reality.
That is, foods and nutrition, health and medicine, home gardening, survival and off-grid living, how to make your own medicinal herbs, how to repair things, how to prepare yourself for hard times, plus finance, money, economics, and history, the things that matter.
We did not train it on high-level math problems, which is where ChatGPT excels.
Now, a lot of the benchmarks that are used by the AI industry to rate their engines are based on high-level math because the people who build AI are high-level math geeks.
So they like to test AI engines on their high-level math that nobody else uses in the real world.
So our engine will not beat ChatGPT on mathematics, but it beats ChatGPT on everything surrounding common sense, from gender and preparedness to nutrition, preventing chronic degenerative diseases, boosting your health, enhancing longevity, etc.
Because those are the things that matter in the real world to real people.
And so our engine will bring you a level of intelligence that used to be common in the West in the 1980s or 1970s, but a level of intelligence that has since vanished for the most part.
So if you find our model speaking to you in slightly higher-education-sounding language, that's why.
And that's on purpose.
We did not build it to be a chatbot. It's not designed to be your friend or to have a relationship with you; it has no such training. It's designed to be a research engine: to help you write articles, expand bullet points, or summarize science papers,
or, more importantly, to find deep answers to your questions about health and nutrition and all the topics we cover, including finance, economics, and history.
It also covers a lot about physics and chemistry, and it can translate between multiple languages as well.
So it does a lot of different things.
So you can use it, it's free.
It's at Brighteon.ai.
And the reason it's free is because, well, we're paying for it.
And we kept it non-commercial, nonprofit, on purpose.
We don't want it to be commercial.
We don't want to make money off of it.
We want to spread human knowledge.
That's my mission: to spread human knowledge.
And the engine is not only free, but it's also going to keep getting better, because we are in the process of analyzing literally hundreds of terabytes of additional human knowledge from across the world, spanning all of modern time.
That includes knowledge that's hundreds of years old but recorded in books, especially from ancient cultures like Chinese culture.
We are processing that information and finding out the real nuggets of intelligence.
It's kind of like mining for treasure, knowledge treasure, that's what we do.
And instead of throwing a bunch of garbage into an AI engine and calling it Grok or whatever, we are mining for really high-end human knowledge and then using that to train our models.
And so our models are only going to get better and better with each passing month.
So thank you for supporting us.
Thank you for using our tools.
And if you want to help us, the simplest way is to just tell people about our engine.
Tell people to use it.
It's at Brighteon.ai.
That's B-R-I-G-H-T-E-O-N.ai.
The current engine is called Enoch, although there'll be other names for upcoming engines.
And understand that it's going to get better.
Currently, about 19 out of 20 times it gives you a really great answer; one out of 20 times, a pretty horrible one.
If that happens, just rerun the same prompt; it will almost always correct itself on the second run.
So have fun using it.
You can research foods, recipes, and food ingredients; you can research pharmaceuticals; you can research herbs; everything related to health.
Literally everything that's ever been known by humanity related to health is probably in the engine right now with a lot more knowledge yet to come.
So I'm Mike Adams, the founder of Brighteon and the creator of the Enoch engine, and thank you for your support.
Thank you for spreading the word and use it to empower humanity with knowledge, bypass censorship, and uplift your game.
All right, thanks for listening.
Take care.