Live in Santiago 2025.03.14 (Movistar Arena)
Tag: feditestsss
Video
This is a video.
Image asdfasdfasdf

Image text. Edited 2.
This is a status-format post with 3 images.

Another test image 1234

Just one more test image.
New Test 9832
What makes us gain weight? The question is simple, but the answer is not. We could simply say it is a matter of arithmetic, calories in versus calories burned, but that would be oversimplifying the matter, forgetting that each body metabolizes food differently and that different foods and combinations of foods can lead to slightly different results.
A single protein. In one experiment, the deficiency of a single protein, CD44, prevented a group of mice from gaining weight even while they were fed a high-fat diet. The team behind the experiment was trying to probe the role this protein plays in metabolic health and obesity, a role that, according to the results, is a major one.
“We previously reported that CD44 deficiency suppressed neuroinflammation. Given the critical role inflammation plays in the progression of obesity and related complications, including hyperglycemia and insulin resistance, we proposed that CD44 may play a significant role in these processes. We therefore investigated the potential link between CD44 and metabolic disorders,” Cheng Sun, co-author of the study, explained in a press release.
CD44. But what exactly is this protein, and what do we know about its function in the body? CD44 is a transmembrane protein, one located in the cell membrane that spans the layer covering the cell. As the team itself explained, this protein “plays an essential role in transducing extracellular stimuli into intracellular signaling cascades,” and thus contributes to metabolic regulation. The protein is also of particular importance to cancer cells.
Another one ww (edited)
Tesla has proposed a massive new $1 trillion compensation package for its CEO Elon Musk, and many of the benchmarks he needs to hit are simply watered-down versions of promises he’s spent years making about the company.
That’s not the picture Tesla’s board of directors paints in the company’s annual proxy statement, where they revealed the proposed pay package. Instead, the board focuses on how it plans to create “the most valuable company in history.”
To be sure, if Tesla accomplishes all that it aims for with this deal, it will look like a much different company at the end of the 10-year period it covers. That doesn’t change the fact that the milestones the company is asking Musk to aim for are less ambitious than his own previously stated goals.
While the unprecedented pay package still needs to be approved by shareholders at a meeting in November, it’s easy to see the company’s fervent fan base voting “yes.” Previous votes on Musk’s compensation have been overwhelmingly approved by Tesla’s shareholders.
With that in mind, let’s take a look at what Musk needs to accomplish in order to receive the full payout.
Musk spent years claiming Tesla would be able to make 20 million electric vehicles per year by 2030. This was back when he and his company were still promising to grow at a rate of 50% each year.
But Tesla walked away from those promises as sales growth stalled, and then reversed in 2024. The company then pulled the 20-million-per-year goal from its impact report last year and stopped building a planned factory in Mexico that would have increased production.
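For perspective, here is a quick back-of-the-envelope sketch in Python showing what sustained 50% annual growth would have implied for the old 20-million goal. The roughly 1.31 million deliveries assumed for 2022 are an illustrative baseline, not a figure from the proxy statement:

```python
# Illustrative only: compound the 50% year-over-year growth Musk long
# promised from an ASSUMED baseline of ~1.31 million deliveries in 2022,
# and see where the "20 million by 2030" goal would have landed.
deliveries = 1.31e6   # assumed 2022 baseline (approximate)
growth_rate = 1.50    # 50% annual growth

for year in range(2022, 2031):
    print(f"{year}: {deliveries / 1e6:.1f}M vehicles")
    deliveries *= growth_rate
```

On that curve, deliveries would have cleared 20 million per year by 2029, ahead of the 2030 target; the 2024 decline shows how far actual sales drifted from it.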
Another Test Post x+1
There’s been great interest in what Mira Murati’s Thinking Machines Lab is building with its $2 billion in seed funding and the all-star team of former OpenAI researchers who have joined the lab. In a blog post published on Wednesday, Murati’s research lab gave the world its first look into one of its projects: creating AI models with reproducible responses.
The research blog post, titled “Defeating Nondeterminism in LLM Inference,” tries to unpack the root cause of what introduces randomness in AI model responses. For example, ask ChatGPT the same question a few times over, and you’re likely to get a wide range of answers. This has largely been accepted in the AI community as a fact (today’s AI models are considered to be non-deterministic systems), but Thinking Machines Lab sees this as a solvable problem.
The post, authored by Thinking Machines Lab researcher Horace He, argues that the root cause of AI models’ randomness is the way GPU kernels — the small programs that run inside of Nvidia’s computer chips — are stitched together in inference processing (everything that happens after you press enter in ChatGPT). He suggests that by carefully controlling this layer of orchestration, it’s possible to make AI models more deterministic.
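The mechanism is easy to demonstrate outside a GPU. Below is a minimal Python sketch of the underlying effect, floating-point non-associativity; it illustrates the principle only, and is not Thinking Machines Lab's kernel code:

```python
# Minimal sketch: floating-point addition is not associative, so two
# reduction orders over the same inputs can round differently. GPU kernels
# that change their reduction strategy (e.g. with batch size) hit the same
# effect, one source of the nondeterminism the post describes.
import random

random.seed(0)
values = [random.uniform(-1e6, 1e6) for _ in range(100_000)]

# One order: a plain left-to-right sum.
sequential = sum(values)

# Another order: partial sums over 1,024-element tiles, then a sum of the
# partials, loosely mimicking a tiled GPU reduction.
chunked = sum(sum(values[i:i + 1024]) for i in range(0, len(values), 1024))

print(sequential == chunked)      # typically False
print(abs(sequential - chunked))  # small but nonzero discrepancy
```

In the post's framing, making inference deterministic means pinning down choices like this so the reduction order never varies with batch size or server load.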