The Future of Everything Is Lies, I Guess: New Jobs - Comments

The Future of Everything Is Lies, I Guess: New Jobs

elcapitan

Is there a way back to the pre Jeffrey Dahmer world, where human beings are human beings and not "meat"?

sp527

Sure. Would you like WWII, medieval-era Christianity, or Khanate Asia?

cwmma

What are you talking about? The only use of meat is in "Meat Shield", a phrase that's been around for a long time now.

jjulius

In the article, Ctrl+F for "meat" returns 3 results, while "human" returns 8. Seems like "human" remains the dominant word of choice in this author's vernacular.

Edit: Further, the only times "meat" appears is in the phrase "meat shield", which is an analogy that is very apt relative to the crux of the article.

Edit 2: "People" appears 13 times!

sebg
fHr

Meat puppeteering has nothing to do with Jeffrey, just the state of slowly getting pushed into doing 95% of dev work with agents.

ai_critic

"meatshield" has the correct connotations for that sort of work.

crote

That is exactly why the term is being used.

A company like Amazon doesn't treat its warehouse workers as human beings. Workers are seen as disposable: forced to piss in bottles, forced to work around the corpses of their collapsed coworkers, paid the absolute minimum possible, and replaced the second they don't operate like a perfect unfailing machine. You aren't viewed like a human, you are a tool. Cattle. A piece of meat they are forced to retain because a robot isn't quite capable of doing your task yet.

The article's use of "meat shields" isn't any different. Humans are going to be hired for the sole reason of taking accountability for actions dictated by AI. They are there only because the company can't put blame on a machine and will be sued to oblivion if there's nobody to blame at all. Your existence as a person is irrelevant, they are just interested in someone with a heartbeat they can blame when stuff inevitably goes wrong.

pmg102
gordonhart

Why post an archive link for a static site with no ads or subscriptionware?

dsmurrell

"Unavailable Due to the UK Online Safety Act" - without my VPN... do you know why?

lbotos

aphyr may have some NSFW photos on the site IIRC which may have got the domain swept up with the new UK laws.

nemomarx

Geo-blocking the UK satisfies any age-verification requirement; otherwise the site owner would have to check whether their content is considered adult in the UK and implement something.

draw_down

[dead]

nonameiguess

This is part 9 of a 10-part series. The author has posted every chapter to Hacker News every day for the past 9 days. Every time, four of the first five or so comments are:

Someone noting it is unavailable in the UK.

Someone posting an archive.is link.

Someone asking why the above posted an archive link to a static site.

An answer that it is because the content is otherwise unavailable in the UK.

Do we really need to see this every single time?

I realize I am also not adding to the real discussion now as well, but Jesus Christ, this is irritating. Can we get a new rule that an author posting their own content, knowing it is unavailable in the UK, has to post their own archive link and explain why they're doing so as part of the submission?

tele_ski

https://xkcd.com/1053/

Relax, not everyone sees every article every day.

kreco

I wish we could flag some posts (like as "tangential") instead of this archaic upvote/downvote.

And obviously a way to filter in/out those flags.

jasonmp85

[dead]

pixl97

>Can we get a new rule that an author posting their own content, knowing it is unavailable in the UK, has to post their own archive link and explain why they're doing so as part of the submission?

[Author blocks the link to avoid potentially being in violation of the law]

You ask the author to willingly provide a link and again potentially be in violation of the law

Do you not see the irony in your question?

ai_critic

I think that this is an interesting attempt at taxonomy, but it's a bit on the magical thinking end (and I say this as somebody that does a good amount of what's described as the incanter role). It's a combination of the author's previous witchy aesthetic (see his excellent "<x>ing the technical interview" series) and progressive labor politics (which are asymptotically doomed in the current automation push).

The biggest failure of imagination, I think, is the assumption we'd use humans for most (or *any*) of these jobs--for example, the work of the haruspex is better left to an LLM that can process the myriad of internal states (this is the mechanistic interpretability field).

zephyrthenoble

And when the haruspex LLM fails, what do we turn to?

mitthrowaway2

Yes, I had the same impression. I'm sympathetic to the author's perspective but I can't muster even the minimal optimism they've shown here. The "process engineers" as described would themselves quickly be replaced by an automated system. The "statistical engineers", I think, would never be able to keep up with the rate of change of the AI models, which would likely have different statistical behavior and biases in each language/context/etc with each update, and so it's unlikely anyone would pay them to develop that required deep expertise in the first place. More likely, that work would be done at an AI foundation model company -- but it would be done just once, and then incorporated into the training process.

thrance

> and progressive labor politics (which are asymptotically doomed in the current automation push).

What do you mean exactly by this?

jayd16

A magic 8-ball "can process the myriad of internal states" of any question you throw at it. But we don't use it even though it can give us answers.

ej88

I am personally of the opinion that ML will end up being 'normal technology', albeit incredibly transformative.

I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'. Jobs that require accountability will involve directing, providing context to, and verifying the output of agents, much like how millions of workers today know basic computer skills and Microsoft Office.

In my opinion, how at-risk a job is in the LLM era comes down to:

1: How easy is it to construct RL loops to hillclimb on performance?

2: How easy is it to construct an LLM harness to perform the tasks?

3: How much of the job is a structured set of tasks vs. taking accountability? What's the consequence of a mistake? How much of it comes down to human relationships?

Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks but then 3) expands and dominates the rest of the role.
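To make 1) and 2) concrete, here's a rough sketch of a verifier-style harness for a sandboxed coding task (toy code; `model.complete` stands in for a hypothetical LLM client, and real harnesses are far more involved):

```python
import os
import subprocess
import tempfile

def run_in_sandbox(solution_code: str, test_code: str) -> bool:
    """Write the candidate solution and its tests into a temp dir, then run pytest."""
    with tempfile.TemporaryDirectory() as sandbox:
        with open(os.path.join(sandbox, "solution.py"), "w") as f:
            f.write(solution_code)
        with open(os.path.join(sandbox, "test_solution.py"), "w") as f:
            f.write(test_code)
        result = subprocess.run(["pytest", "-q"], cwd=sandbox,
                                capture_output=True, timeout=60)
        return result.returncode == 0

def reward(model, task_prompt: str, test_code: str) -> float:
    """Binary reward an RL loop could hill-climb on: do the sandboxed tests pass?"""
    candidate = model.complete(task_prompt)  # hypothetical LLM client call
    return 1.0 if run_in_sandbox(candidate, test_code) else 0.0
```

The point is that this verifiable part is easy to automate; the accountability and relationship parts in 3) are not.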

On Model Trainers -- I'm not so convinced that RLHF puts the professional experts out of work, for a few reasons. Firstly, nearly all human data companies produce data that is somewhat contrived, by definition of having people grade outputs on a contracting platform; plus there seems to be no limit on how much data we can harvest from the world. Secondly, as I mentioned before, the bottleneck is both accountability and the ability for the model to find fresh context without error.

netcan

In some sense, technology is "not normal" regardless.

If we think of the digitization tech revolution... the changes it made to the economy are hard to describe well, even now.

In the early days, it was going to turn banks from billion dollar businesses to million dollar ones. Universities would be able to eliminate most of their admin. Accounting and finances would be trivialized. Etc.

Earlier tech revolutions were unpredictable too... But at least retrospectively they made sense.

It's not that clear what the core activities of our economy even are. It's clear at the micro level, but as you zoom out it gets blurry.

Why is accountability needed? It's clearly needed in its context... but it's hard to understand how it aggregates.

xienze

> Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks but then 3) expands and dominates the rest of the role.

Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer? I've never seen anyone give a satisfactory answer to this. Especially the part about making mistakes. A lot of the defense of LLM shortcomings (i.e., generating crappy code) comes down to "well humans write bad code too." OK? Well, humans make mistakes too. Theoretically, an LLM software engineer will make far fewer than a human. So why should I prefer keeping you in the loop?

It's why I just can't understand the mindset of software engineers who are giddy about the direction things are going. There really is nothing special about your expertise that an LLM can't achieve, theoretically.

We're always so enamored by new and exciting technology that we fail to realize the people in charge are more than happy to completely bury us with it.

Aperocky

As an engineer, I've never been more excited about this job.

My implementation speed and bug-fixing of my typed code used to be the bottleneck - now I just think about an implementation and it then exists. As long as I thought about the structure/input/output/testability and logic flow correctly and made sure I included all that information, it just works, nicely, with tests.

The Unix philosophy works well with LLMs too - you can have software that does one thing and only one thing well, that fits in their context and doesn't lead to haphazard behavior.
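As a toy illustration of the kind of unit I mean (my own example, not from the article): something single-purpose, with inputs, outputs and tests spelled out up front, small enough to hand to a model in one go:

```python
def dedupe_preserving_order(items: list[str]) -> list[str]:
    """Return items with duplicates removed, keeping first-seen order."""
    seen: set[str] = set()
    result: list[str] = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

def test_dedupe_preserving_order():
    assert dedupe_preserving_order(["b", "a", "b", "c", "a"]) == ["b", "a", "c"]
    assert dedupe_preserving_order([]) == []
```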

Now my day essentially revolves around delivering/improving on delivering concentrated engineering thinking, which in my opinion is the pure part of the engineering profession itself. I like it quite a lot.

hombre_fatal

I mostly agree with you.

Though something I half-miss is using my own software as I build it to get a visceral feel for the abstractions so far. I've found that testability is a good enough proxy for "nice to use" since I think "nice to use" tends to mean that a subsystem is decoupled enough to cover unexpected usage patterns, and that's an incidental side-effect of testability.

One concern I have is that it's getting harder to demonstrate ability.

e.g. GitHub profiles were a good signal, though one that nobody cared about unless the hiring person was an engineer who could evaluate it. But now that signal is even more rubbish. Even readmes and blog posts are becoming worse signals since they don't necessarily showcase your own communication skills anymore, nor how you think about problems.

rootusrootus

> My implementation speed and bug-fixing of my typed code used to be the bottleneck

I remember those days fondly and often wish I could return to them. These days it's not uncommon to go a couple days without writing a meaningful amount of code. The cost of becoming too senior I suppose.

the_af

> As an engineer, I've never been more excited about this job.

How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?

Everyone thinks it won't be them, that it will be others who are impacted. We all think what we do is somehow unique and cannot be automated away by AI, and that our jobs are safe for the time being.

sscaryterry
elcapitan

Yeah, that was what I was referring to, not the specific part of the article. I've seen it much more here recently. Kind of disgusting and sad, but on the other hand it's good if people show their real face that way.

siliconc0w

The problem with AI is that it isn't like any previous technology. There may be temporary jobs to fill in the gaps but they won't be careers. The AI will do the process engineering and self-optimization. The prompt witchcraft is a good example because today it's totally unnecessary and doesn't actually increase performance, and they'll continue to make it easier to direct/steer the models.

We're literally trying to build an intelligence to replace us.

righthand

We?

simonw

Loved that section about "meat shields". LLMs cannot be held accountable. Someone needs to be involved in decision making, with real stakes if those decisions are bad.

jppope

The name is very sticky too. I can't imagine not calling people who take the blame "meat shields" now.

buildbot

It just makes logical sense really; the human using the tool is in the end responsible.

Whether the tool is too powerful or ethical to use is an orthogonal discussion, in my opinion. Taken to the extreme, nuclear weapons still need someone to fire or drop them. (We should still have discussions on safety and ethics, always!)

abstracthinking

Humans will be held accountable, not machines, whatever technology is used. The jobs you suggest are based on the state of LLMs right now, and this could change rapidly, considering the pace of progress. These are just activities that are already done by people who work with these tools, because they want to optimize and obtain the best/safest output from these machines.

the_af

> Humans will be held accountable, not machines, whatever technology is used

Isn't this addressed explicitly in TFA, in section "meat shields"?

As for the rest, if you predict even the jobs described in TFA will be obsoleted by future LLMs+tools, then the future is even more dire than predicted by Aphyr, right? Fewer jobs for humans to do.

quantified

All plausible, but not very transformative. Like imagining that the new jobs enabled by the automobile include automobile maintenance, tire shops, and so on. Traveling nurses, motel operators, military tanks, DoorDash, suburban life, beer sales at NASCAR, those were all enabled by the car (and its larger sibling the truck). Still missing are the jobs and industries enabled by "AI" that are not themselves "AI".

pHequals7

We are in a time of irrational exuberance - rationality will set in soon!
