Climategate II
Is this one of those cases where the sequel will be better than the original?
Or will it just go down the rabbit hole? The NY Times is doing its best to ensure that, with an article whose thrust is to pooh-pooh the new emails’ significance. Its first sentence is typical of MSM coverage of Climategate I as well: “The anonymous hacker who…”
“Anonymous”? Yes. But “hacker”? We don’t know; most of the speculation I’ve seen has always been that the release of the Climategate emails was an inside job.
Funny, isn’t it, how that anonymous “hacker” leaking Climategate fraud becomes a global patriot when leaking U.S. state secrets.
Some of these new emails are really amazing. For example, Phil Jones admitting that the models are all wrong; Mike Mann criticizing Judith Curry for not “supporting the cause”. They really are more damning than the first set.
The person, or persons, who released these is known, ironically, as FOIA. This latest release also contains a zip file that is password protected. Someone is playing a very smart game here. With that zip file now in general distribution, all that is needed is for the password to be released.
Read some of the first juicy tidbits at Wattsupwiththat.com or at Joannenova.com.au. The corruption of the science exhibited by the players is incredible.
Don Surber has the full text of the damage-control email being sent out:
http://blogs.dailymail.com/donsurber/archives/46783#more-46783
Of course the models are wrong. There is no earthly way that a model could accurately reflect a phenomenon as complex as climate.
Every model starts with some assumptions; these drive the rest of the process. I do not recall the Scientific Community publishing a comprehensive list of the initial assumptions for the models they use. Secondly, and this goes hand in hand with assumptions, complex models must have some simplification built in. Deciding what to include or exclude is crucial.
Finally, before certification a model must be extensively tested. How were the climate models tested? We know that in many ways their output has not tracked with reality.
The Scientific AGW community has generally resisted all efforts to obtain their records so that their work can be verified. As the Brit AGW guru–whose name I forget–said: “(sic) why should I provide my notes to people who just want to find fault?”. That is not the reaction of honest scientists.
Beyond the modeling, we have been presented time after time with evidence of manipulation of the input data. We are told that we should accept that the manipulation is reasonable and does not distort the output. OK. As we used to say in Florida: “I’ve got some nice waterfront property for sale–sight unseen.”
Some commenters at WUWT have taken to calling the hacker Mr. FOIA.
The Hockey Team has an enemy in Mr. FOIA who is intent on bringing their beliefs and activities into the light. Those of us who are “deniers” are thankful because the Hockey Team and its MSM propaganda arm have had the upper hand for some time. Many things we suspected have been confirmed by the e-mails.
There are only two reasons for acting in a secret and overbearing way on a scientific issue of this nature.
1. There is a mint to be made from pushing a narrative of alarm that creates all kinds of profit opportunities.
2. The theory that CO2 can create catastrophic warming requires that the solution can only come from powerful, centralized government. This is a powerful attractant for those of a progressive bent.
IMO, the Hockey Team acts in such a secretive and persistently unethical manner for both reasons. However, they rationalize that they are doing the Lord’s work (Gaia being their Lord) and are above reproach.
Occam’s Beard:
the Brit AGW guru—whose name I forget—said: “(sic) why should I provide my notes to people who just want to find fault?”. That is not the reaction of honest scientists.
Indeed. Those who posit a scientific theory deserving of the name welcome attempts to disprove it.
What did it for me was the data fudging.
This second set of e-mails is icing on the cake – or frosting on the fudge(d data).
My mistake: I was quoting Oldflyer, not Occam.
The first Climategate emails were left on an FTP server that was open to downloads. No hacking, nothing needed, just curiosity. One of the people downloading them exposed them; there was probably more than one doing so, since the server was used to trade files and information, as FTP servers often are.
The emails have been made available once again in a free for all from a Russian server called Sinwt.ru. Mysteriously, the server has since gone offline. However, thousands of bloggers still had time to download and share copies of the new release of a 173MB zip file called “FOIA2011.”
Analysts are convinced the release was timed to cause maximum impact on this week’s international COP 17 climate summit in Durban.
http://johnosullivan.livejournal.com/42066.html
Kind of funny, but IF you take the time to REALLY look deeply at big troubles in the world, it so frequently points to the same place that one can barely imagine how peaceful the world would be without that one thing always involving itself everywhere…
Note that the IP address is in east Germany…
lat 51, lon 9.0000 (near Marburg)
The notable difference between ‘Climategate 1.0’ and the new ‘2.0’ version is that the whistleblower this time has added his/her own personal message, which includes the plea, “Today’s decisions should be based on all the information we can get, not on hiding the decline.”
From another source:
Correction: not east Germany… more like central Germany.
On another note…
CO2 actually COOLS our atmosphere
Nitrogen and Oxygen do not absorb infrared, ergo they do not emit infrared. CO2 absorbs infrared, and so emits infrared.
Given this, CO2 emits its energy as infrared, and what remains is a cool molecule whose temperature is below the energy needed to emit. However, CO2 does get hit by warm oxygen and warm nitrogen.
Since they do not emit infrared, they can take on more energy states than CO2. When they collide, thermal energy is transferred to the CO2, whose emission temperature is lower, and so it emits that energy as infrared.
See: “Role of heat reservation of N2 and O2 and the role of heat dissipation of CO2 and water vapour”
(I have not found this PDF yet.)
http://www.palmbeachdailynews.com/news/global-warming-a-hoax-for-political-gain-larry-1981280.html
The luncheon meeting was held at The Colony. Bell, an endowed professor of space architecture at the University of Houston, has received awards for his work regarding international space development. He also is a weekly columnist with Forbes.com.
Bell said people have promoted global warming theories for political gain.
-==-=-
There needs to be a climate scare in order to justify growth in government, particularly in the Environmental Protection Agency, as well as to appease powerful environmental lobbies, he [Bell] said.
Since they do not emit infrared, they can take on more energy states than CO2. When they collide, thermal energy is transferred to the CO2, whose emission temperature is lower, and so it emits that energy as infrared.
A little off base there, Art.
You’re correct in saying that O2 and N2 have no IR-active vibrational modes (being centrosymmetric diatomics, they only have Raman-active modes), but not in saying that they can take on more energy states (i.e., have more vibrational energy levels) than CO2. (Linear molecules have 3N-5 vibrational modes, where N = number of atoms.)
Thermal energies at room temperature are about 210 cm-1 (kT, where k is the Boltzmann constant and T is the absolute temperature in kelvins); the only IR-active vibrational mode of CO2 is at 2345 cm-1 (IIRC). Being more than 10 kT above thermal energies, the IR-active mode of CO2 is not going to be activated by collisions.
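For what it’s worth, a quick numerical check of those figures (my own back-of-the-envelope in Python, assuming T = 298 K):

# kT expressed in wavenumbers (cm^-1) at room temperature,
# plus the 3N-5 vibrational-mode count for linear molecules.
k = 1.381e-23      # Boltzmann constant, J/K
h = 6.626e-34      # Planck constant, J*s
c = 2.998e10       # speed of light, cm/s
T = 298.0
print(k * T / (h * c))            # ~207 cm^-1, i.e. roughly the "210 cm-1" quoted above
for name, N in (("N2", 2), ("O2", 2), ("CO2", 3)):
    print(name, 3 * N - 5)        # 1, 1, and 4 vibrational modes respectively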
All that having been said, I believe AGW is merely a convenient vehicle for the “progressives” to extend government control over the economy. From each, to each, and all that.
Thanks, Occam. I am very poor at stating what I mean to say…
How would I put it: since they emit at a higher energy, the one that emits at the lower energy would emit before reaching the higher modes of the others?
There are papers on collisional quenching where the IR is emitted.
Can’t translation-translation and translation-vibration transfer, through many interactions in the atmosphere with nowhere else to lose the energy, cause emission?
While one collision can’t, several can.
Isn’t that transference how an N2/CO2 laser works?
Transferring the higher energy of N2 to CO2, which then emits IR?
I’ve got to go back and read more… 🙂
That’s the problem with broad knowledge.
It is simply unknown whether CO2 warms or cools the atmosphere, since the problem of radiative heating is unbelievably complex. It depends on the relaxation time of excited molecules and the relative importance of non-emitting and emitting excitation/relaxation. Such quantum mechanical calculations are beyond the capabilities of supercomputers, and observing these effects experimentally under realistic conditions of trace amounts of CO2, convection, and evaporation/condensation is not possible. The very existence of a greenhouse effect in the real atmosphere is pure speculation, and estimating the magnitude of this effect (if it exists) is not possible.
Sergey’s statement goes pretty much over my Physics-challenged head. I have read credible sources that support his thesis. I have also read that it can be observed that increased atmospheric CO2 follows atmospheric heating; it does not precede, or cause, it. Like most assertions about this issue, it is difficult to determine their credibility; sometimes even their veracity.
My comments on the complexity of modeling flow from a certain, albeit limited, level of education in the subject, and input from friends who not only studied the subject in greater depth, but worked in the field performing computer based modeling of relatively complex problems–although nothing approaching the complexity of the world’s climate.
Trying to impart to non-scientists the basis for my skepticism about AGW has proven difficult.
The two most telling arguments for such an audience are, first, asking them what proportion of the atmosphere CO2 comprises (usual guess: 5%, off by a factor of 100), and then likening the atmosphere to the population of the US. In that case anthropogenic CO2 amounts to adding about 4,500 people to 150,000 exactly equivalent people already here. Kinda hard to believe that makes much difference.
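Reconstructing that analogy’s arithmetic (my own sketch in Python; the 0.05% and 3% figures are what the comment’s numbers appear to assume, not measurements):

us_population = 300_000_000        # the "exactly equivalent people" standing in for the atmosphere
co2_fraction  = 0.0005             # CO2 as roughly 0.05% of the atmosphere (assumed)
anthro_share  = 0.03               # roughly 3% of that CO2 attributed to humans (assumed)
co2_people    = us_population * co2_fraction     # ~150,000 "CO2 people"
anthro_people = co2_people * anthro_share        # ~4,500 added by us
print(int(co2_people), int(anthro_people))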
The second argument concerns computer modeling, regarding which, as an experimentalist, I am profoundly skeptical. Modeling the climate must be a problem at least as difficult as modeling the economy (and probably more so).
Yet despite the immense resources of the financial community, and the incredible motivation to do so, no one can reliably predict (not guess) the state of the economy over any reasonable period. (Hello, Long-Term Capital Management! How’d those Nobel Prizes in economics work out for ya?)
So Wall Street, the City of London, etc., which can throw their pick of mathematicians and programmers and hardware into the fray, have failed, but a few smelly-socks graduate students plugging away for a few years apiece in various backwaters have cracked a more difficult problem and can now model the entire climate?
Sorry, no sale. Their “models” are hopelessly and fundamentally flawed in some fashion. I guarantee it. (For example, something like the Mars Climate Orbiter fiasco: one group used English units, the other metric. Oops.) There’s a reason why they won’t release their algorithms.
The core of the problem with models is that they have dozens of free parameters that are used to fit model output to existing data sets. That makes them infinitely malleable, such that they do not constrain output in any meaningful way. The ability to faithfully reproduce any given data has its dark side: after further tinkering they will be able to reproduce anything, and so are useless for prediction. As von Neumann once said, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
The whole saga of AGW now looks like a Big Government solution in search of a problem to solve. Never mind; after it falls flat they will find another problem with the same solution, and will scare the world with apocalypse if it does not obey.
The back-radiation problem is easy.
You take a solar heater and point it at the sky at night.
If it gets warmer, then there is back radiation.
Some guys at Harvard did that…
But as with other things, their work never made it past the guardians.
Also, starlight goggles and such would not work if the premise were correct… you would not see clear images as you do, but cloudy ones, or even worse, hazy, foggy images.
Testing the concept is not all that hard…
Glass tube, infrared laser… etc.
But no one is trying to disprove it; like Lysenko, they all want to prove it, for political ends…
Which is why I say, when global totalitarianism is said to be the only solution, then perhaps they are lying… since global human enslavement can’t be a solution to much of anything (even less so given history).
And to Occam… now you’re talking my 30 years’ expertise area: applications engineering (and models are an application).
The second argument concerns computer modeling, regarding which, as an experimentalist, I am profoundly skeptical. Modeling the climate must be a problem at least as difficult as modeling the economy (and probably more so).
The economy would be EASIER…
The data available, the fact that it’s not infinite decimal places, and so on, put them in two different worlds.
The atmospheric models are GHASTLY and completely ignore fundamental problems in information engineering, because the modelers are not information engineers and haven’t hired any; they are researchers who cobble together something that seems to work but which, built in complete ignorance, does not handle the impossible things it has to answer.
The simplest crock to point out is the rounding problem. (There are over 30 ways to round, and people don’t even believe that!)
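A tiny illustration of just a few of those modes (my own sketch, using Python’s decimal module; the same number, four different “correct” answers):

from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN, ROUND_FLOOR, ROUND_CEILING

x = Decimal("2.5")
for mode in (ROUND_HALF_UP, ROUND_HALF_EVEN, ROUND_FLOOR, ROUND_CEILING):
    print(mode, x.quantize(Decimal("1"), rounding=mode))
# ROUND_HALF_UP   -> 3
# ROUND_HALF_EVEN -> 2   (banker's rounding)
# ROUND_FLOOR     -> 2
# ROUND_CEILING   -> 3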
But it goes to the chaos-theory stuff that most people have kind of heard of via the butterfly effect.
But let’s put this facet in terms ANYONE with basic math can understand. I have always been like Feynman in that I don’t like talking shop language. If you can’t say it in plain English (or your normal native tongue), then you don’t really know it!
Anyway…
There are two things that collide here…
If you accept that all the other stuff they do is valid (and it’s even worse than what I am about to explain, given that each Earth square in their model is around 400 kilometers on a side and is missing stuff that even a grade-school kid would ask about if you listed what it does pretend to put in)…
OK… the first part of the FUNDAMENTAL problem is the starting positions and information of the model.
This even confounds Fourier transforms, which don’t like abrupt starts or endings (and they have ways of sort of dealing with that).
But the point is measuring the starting data.
The OTHER fundamental part, which goes hand in hand with that, is how many decimal places you will include and what kind of rounding you will do.
This is a limitation of the machines you’re on, even with long math… no machine can represent pi exactly. At some point you have to round.
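The classic two-line demonstration of that machine limitation (my own example, not anything from the models):

print(0.1 + 0.2)            # 0.30000000000000004 on standard binary floating point
print(0.1 + 0.2 == 0.3)     # False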
And we don’t actually know if reality rounds!
I.e., do decimal places stop at the level of Planck lengths or times?
OK… so you start with data that is slightly off… and your numbers do not go down to the 30th decimal place…
And here is where this all comes together easily.
The system is of a special class of mathematical iterative systems where the input to each computation is the output of the prior calculation.
Like pseudo-random number generators: the seed (the starting data) is used to calculate the output, then the output is used as the next input for the same formula.
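A minimal sketch of that feedback structure, a toy linear congruential generator (the constants are standard textbook ones, nothing to do with any climate code):

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    # each output x becomes the input to the next step
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

print(lcg(42, 5))    # change the seed slightly and the whole stream changes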
Lots of things exist this way… cellular-based life is that way; the child that you pick is the starting point of the future… and so on.
This is why orbit calculations can’t tell you where the planets will be after a certain number of iterations: your model will deviate more and more from reality as the errors compound over time.
A = 1.000003 is not the same as B = 1.
An iterative series of B would be the number line:
1, 2, 3, 4, 5, 6
An iterative series of A would be:
1.000003, 2.000006, 3.000009, etc.
This is fundamental…
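A toy demonstration of that compounding (my own illustration, using the logistic map, not any actual climate code):

# Two runs of the same iterated formula, x -> r*x*(1-x), whose seeds
# differ by only 0.000003. The gap grows from ~3e-6 to order 1 within
# a few dozen iterations, even though both obey exactly the same rule.
r = 3.9
a, b = 0.500003, 0.5
for step in range(1, 41):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        print(step, abs(a - b))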
Oh… and the Heisenberg uncertainty principle means that the starting-position information is IMPOSSIBLE to ascertain exactly… even for one particle….
I could go on and on listing tons of other things too, and how you handle each will make a difference.
For instance… if your calculations give 3.000009 in decimals, and your output is limited to integers, or to only two decimals, your number line will look fine.
But at some point in the iterations those tiny amounts will add up, and you’re going to skip an integer in the series.
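A toy version of that integer-skipping point (my own sketch):

# Repeatedly add 3.000009 but report only the rounded (integer) result.
# For tens of thousands of steps the output looks like the clean series
# 3, 6, 9, ... and then the accumulated residue finally shows up.
x = 0.0
for n in range(1, 200_000):
    x += 3.000009
    if round(x) != 3 * n:
        print("first visible drift at step", n, "->", round(x), "instead of", 3 * n)
        break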
ALSO, most researchers are ignorant of this; they work from the old idea that they can isolate things, and that tiny amounts don’t mean much.
But the real truth in the world is that there is a percentage of cancers caused by a subatomic particle that came from a supernova 300 million light-years away, traveled really fast around the galaxy to reach the solar system, arrived at a planet that is moving really fast around the local gravity well and rotating as it travels, and found a person who was in just the right place, so that when that high-energy particle reached them it smashed into a DNA molecule in a way that was not corrected, perhaps flipping a methylation switch and putting the cellular lineage on another path.
Science chooses to ignore that for the convenience of getting to some answer, rather than never getting there.
So it’s not so hard to understand why their models are going to fail fundamentally.
Does this mean models are not good or usable? Nope. But it does mean that if you’re ignorant of these basics, you’re going to pretend that your model can run forward millions of iterations and that .00001 isn’t going to cause a problem.
But when your data deviates, then what do you do? Well, in ignorance you start to fix the data… you start to play with it. You certainly don’t have enough information theory and science to know why it’s deviating.
And guess what? The second you do that, the model is so erroneous it’s useless.
At least the one that is allowed to deviate will follow closely enough to be useful until it deviates too far; but the one that gets tweaked to remove the fundamental flaw that can’t be removed is just plain unreal, and it never gets rid of the flaw; it just lies to push it off and then ignores it.
I.e., they start a million years ago, then get up to today, they correct the data so that the million-year iterative game matches reality, then assume that going forward there is no more problem.
But they did not get rid of the actual problem.
And they pushed the test off a million years into the future…
It’s that simple to prove them wrong from an information-science aspect…
Better models could do better, but the whole class of these models and software I have examined is very complex, yet the models serve very little real-world usefulness, despite being used for political games (and global warming is just a tiny part).
A whole domain of science and policy decisions based on these complicated models, which pretend there is no problem, is a joke.
Did I explain that in a way that was easy enough?
Another abstract way to put it: the data skew from rounding is like playing with dice you don’t know are slightly loaded.
The economy would be EASIER…
The data available, the fact that it’s not infinite decimal places, and so on, put them in two different worlds.
Thanks, Art. I didn’t know this for a fact, but presumed it to be so. I suspect that not all of the zeroth-order phenomena are known for climatology, and that the models are therefore rather like Lord Kelvin’s calculation of the age of the earth from its rate of cooling, which neglected the (then unknown) heating effect of radioactive decay. And never mind the cross-terms (how atmospheric physics and chemistry interact with the biology of phytoplankton (phytoplankta?)).
The economy/climate modeling dichotomy usefully brings home the point to non-cognoscenti: that despite the availability of enormous resources and the motivation of untold wealth, reliably modeling the economy – a simpler system – has proven elusive.
And I know you get this, but just to make it a tiny bit clearer: just because the economy may be simpler in terms of data, and having it and all that, it’s in no way simple, since this is a RELATIVE statement comparing the two (not placing the two on the spectrum of the whole).
Quants, physicists who do financial analysis (I did some related work, but am not really a quant), do have great success in predicting parts of the market. But as always, the market as a whole will remain unpredictable, because you have a billion independent computers (people) and more out there making choices, of which you can mostly see only the aggregate outcome (magnify too much and you drown in data you can’t even scan through). Currently I have a solution for the “unstructured data problem,” but am having a bear of a time with it, as its solution seems to people to be too simple. I.e., they WANT complexity and no longer get that a simple solution that cuts to the core is THE best solution, and is HARDER than complex solutions.
Could Einstein’s theory be much larger and more complicated than stated? Of course… but the fact that it is so simple, that it captures the essence, is what makes it so great.
On another point: they ARE confirming the “faster than light” speed of the neutrino….
However, this does NOT violate Einstein; it CONFIRMS him more!!!!!!!!!!!!!!
The SIMPLE answer is in front of them, but they are so used to complex answers (string theory, what a crock) that they don’t see a simple answer that doesn’t violate anything but instead confirms it.
I.e., the neutrino, being its own antiparticle, is not mass-bound… i.e., it does not follow the warped curve of space but moves in a straight line RELATIVE to that warp.
I WISH I could express this and get credit for it, but it’s just going to be stolen and that’s it… sad really, as the originator has a lot more insights to share!
The key to THAT problem is to NOT FORGET that spacetime is curved… and so a photon, which IS affected by gravity, has to follow the gravity well, and in that well we feel everything is flat and normal.
However, if you’re going to measure ACROSS a gravity well using a neutrino, you will get the true FLAT universe span, not the curved one, as the neutrino does NOT follow the curve….
Think of it like a real well: to traverse it, a crawling bug (affected by gravity) has to crawl down the well, across the bottom, and up the other side… at the speed of light. So anyone in that well would see the bug move at the correct RELATIVE speed. However, there is also a flying bug… it’s not bound by gravity… (it’s a gedanken, go with it 🙂 ) and so it just flies across the gravity well… to someone IN THE WELL, it appears to be going faster than light… but it hasn’t.
There are lots of similar illusions… and relativity points them out as perceptual. In a universe of only two particles moving away from each other at the speed of light, with no other particle as a third reference, it would look like the one you’re standing on is still while the other is moving away at twice light speed.
The SAD part is that this is a simple premise that comes from the relativistic concept of curved spacetime.
So unlike global warming, I will, like Einstein, tell you how it could be falsified (maybe):
The speed of traversal will be different depending on whether you go across the well’s center or traverse some point in the well not at the center.
I.e., the more the flight follows the curve of the outer wall of the well, the smaller the difference will appear.
So in space where there is little matter, light and neutrinos move at the same speed. But across a well, like a planet with lots of gravity, the light follows the well and the neutrino doesn’t.
So the apparent light-speed difference should shift based on which part of the curved space the path traverses…
Now… if I am right… and I figured it out the first day they announced the results… I should get a Nobel for that idea AND for working out the math… but I won’t work out the math, as I already know that even if I had it perfect, no one will listen to an uncredentialed patent clerk; I mean nobody, today, as merit is not the key…
Having a hole for entry or tanned skin is now what defines superior ability in physics, biology, etc., and in fact, if you surgically give yourself such a hole, you’re able to change your ability in physics, biology, genetics, math and such!
So personally I just hope someone will remove me from the playing field so that I can stop being tortured by the fact that I don’t have tanned skin or a proper socially acceptable hole…
By the way…
One last thing (sorry I am so long):
The economy IS reliably modeled…
If you plumb the depths of order, the idea of what’s predictable, would you be able to reduce the ‘volatility’ to zero if there is randomness in the system?
No.
So the fact that you CAN’T predict much means that these people, and others with their brains, hunches, and such, have plumbed out all the order they can exploit… and are at the end of that limit!!!
I.e., in the absence of computer trading and so on, it would be easier to find order to exploit.
The problem is similar to data compression.
When you compress data (losslessly) you exploit order, until all you have left is quite random, because you have milked all the compressible order out of it. Lossy compression says: if I can change a few figures, I can, at the sacrifice of fidelity (and the effort of seeking such points), inject order that can then compress the data farther. But you lose that real data, because when you reverse it there is no way to re-tweak the data points you changed to recover that bit of extra compression.
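A concrete toy version of that order-milking idea (my own sketch, using Python’s zlib):

import os, zlib

ordered = b"ABCD" * 250_000          # 1 MB of highly ordered data
random_ = os.urandom(1_000_000)      # 1 MB with essentially no order to exploit

print(len(zlib.compress(ordered)))   # a few KB: the order has been squeezed out
print(len(zlib.compress(random_)))   # ~1 MB: nothing left for the compressor to exploit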
So it’s not that the stock market is unpredictable; WHAT’S LEFT of the stock market is unpredictable.
If you saw the data without their mining of the easy trends, seasons, holidays, college graduations, and even the months with more births after 20 years of waiting…
You would plumb those things.
But you’re here later, and the computers have fished out the easy trends… the quants were hired to find the more abstract trends, and what’s left APPEARS to be a lottery…
But it’s what’s missing in the assessment that would make it clearer that it’s not a lottery…
It’s just that regular people have to pick over a carcass in which the small scraps of meat are randomly placed, because the big, easy pieces have been sliced off by the people mining the order and trends.
I can relate this to tons of systems all around you.
But I have no CV per se… I am mostly self-taught in all this and more… it USED to garner me respect, before I grew up and jumped into the pool of idiots who all claim to be from Lake Wobegon and above average… (and who are quite ready to steal your fire and drop you into the crab bucket, as one good idea is enough… and they rationalize that since I get more, I will be OK… and that’s IF they have ANY sympathy… team up and have many ideas and so on? No way; self-esteem means only they can be on top, so a team on top with them not at the helm isn’t worth diddly to them…)
Take the unstructured data problem.
I was just told that it’s an easy problem with an easy solution, and that when the problem is big enough someone with deep pockets will just present the answer.
Ah… this has been a problem since the ’70s and is still a problem.
80% of all data we work with is unstructured, unindexed, and completely inaccessible by current methods due to its volume.
The genetic data problem is but a tiny part of this.
Think of this: I have a means, a PROVEN means, of putting a solution on your desk that can out-search 60,000 computers working 100 years… and do so for under 5k in parts. In fact, it can be done as a DIY project by the higher-skilled people in that realm!!!
But alas, a very famous person working for one of the top sequencing-machine companies (won’t say if it’s Illumina, Pac Bio, Venter, etc.) who I am working with (told you I know people) believes it’s a tiny problem.
They don’t see that they won’t sell more machines unless their customers can wade through the data faster.
I.e., if it takes them 5 years to plumb the depths of 10 genomes… then you’re not going to get them to use your service for 5 years. If they could go through that data in an afternoon… they would want to get more…
So they are shooting themselves in the foot.
But alas, they think a young EE knows more than a 30-year-experienced Fortune 10 and Wall Street applications engineer who attended Bronx Science, has no degrees, and has worked in a degreed field since he was 17…
Heck… he did a thesis on this stuff, and I pointed out to the guy on my side, who now believes in me but is retiring… he doesn’t even know Amdahl’s law.
http://en.wikipedia.org/wiki/Amdahl's_law
So NO solution that anyone is currently working on will get over the peta-peta-peta scale of data!!!
Cloud computing can’t do it…
But they FORGOT Amdahl… (actually, never knew him).
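For reference, here is what Amdahl’s law actually says, with illustrative made-up numbers (a 95% parallelizable workload tops out around 20x no matter how many machines you add):

def amdahl_speedup(p, n):
    # p = parallelizable fraction of the work, n = number of machines
    return 1.0 / ((1.0 - p) + p / n)

for n in (10, 100, 1000, 100_000):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 10 -> 6.9, 100 -> 16.8, 1000 -> 19.6, 100000 -> 20.0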
You let me know how good this is.
I can find matches of any kind, not just exact and wildcard, and can handle data insertions and deletions… it has been simulated, and it works.
Let’s say it’s the same speed as a desktop…
5 GHz… how long would a desktop take to do alignments of 100,000 100-base-pair sequences against 1,000 genomes?
Well, a computer on your desk takes MANY clock cycles to do each 100-base-pair compare, and even more if there are insertions and deletions… with pipelined processors I can’t even give you a good estimate other than: each compare has to be done in many clock ticks…
You have 3,500,000,000,000 bases in 1,000 genomes… and if you’re going to work that data, you’re going to have to compare every character against every character in your targets.
Like the traveling salesman, there is no other way…
So 100k 100-bp sequences represent 10,000,000 characters…
So you have to do 35,000,000,000,000,000,000 compares… and since a PC takes nearly 100 operations to do a simple compare AND run the operating system too…
that gives you
3,500,000,000,000,000,000,000 operations
At 5 GHz… that’s 700,000,000,000 seconds
11,666,666,667 minutes
194,444,444 hours
8,101,851 days
22,196 years
And that’s just to find the matches, not process the finds!!!!
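For anyone checking that arithmetic, here it is spelled out (same assumptions as above: ~3.5 billion bases per genome, ~100 machine operations per compare, one 5 GHz machine):

bases_in_targets = 3_500_000_000 * 1_000             # 1,000 genomes
query_characters = 100_000 * 100                     # 100k reads of 100 bp
compares   = bases_in_targets * query_characters     # 3.5e19 character compares
operations = compares * 100                          # ~100 ops per compare
seconds    = operations / 5e9                        # at 5 GHz
print(seconds)                   # 7.0e11 seconds
print(seconds / 60)              # ~1.17e10 minutes
print(seconds / 3600)            # ~1.9e8 hours
print(seconds / 86400)           # ~8.1e6 days
print(seconds / 86400 / 365)     # ~22,000 years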
Now… you use my solution, and you can go through all that and get the same output in 700 seconds.
And my solution is scalable.
Two PCs will not do the above in half the time,
and 100 will not do it in a 100th of the time.
But if you want, you can do 1 million 100-bp segments in the SAME time.
I.e., I can do 10 times the data and do it in the same 700 seconds…
If I am allowed to put several solutions together and arrange them…
I could do 1,000,000 genomes in one day.
But that solution is not good enough,
and the EE and the other one think the market is too small,
and that there won’t be enough market for it.
I.e., they want to build a data center with 100,000 PCs, with support, electricity, cooling, maintenance, personnel, retirements, and all that,
rather than have something on your desk for 10k.
Given that,
just shoot me.
I also have other solutions to other big problems… but since those problems are deemed unsolvable, I can’t have solved them… or, like above, they can only see their genetic problem, not the 80-plus percent of data that all businesses around the world would love to access.
There is state money to pay for it. In fact, just one legal judgment, which was lost because they could not find a tiny memo, came to 1.4 billion.
1% of that would fund my solution, and that data would be available.
And if you use the best parts available? You can run at 70 GHz…
meaning I can give you that solution in 10 seconds.
But you see, I am not authorized to think.
I am a white male with Asperger’s.
(That’s how I solve things: I have an eidetic memory and am socially isolated, so all I have done is study/memorize and reorganize…)
I have no degrees.
I sit in an 85-degree office that is 47×57 inches, making me sick… half the size of a handicapped bathroom stall. No one knows I do the work… and they promoted someone over me who has no experience and has been at this job less time than me in every way, while I have prior experience from Fortune 10 companies and Wall Street.
Shooting me would be mercy.