Can someone fill in the missing link in my understanding here? It seems like the post never gets around to explaining why waiting for 14272 years should make the river passable. (Nor why this river in particular, as opposed to any other obstacle.)
The post alludes to a quirk that causes people not to get sicker while waiting; but it says they still get hungry, right? So you can't wait 14272 years there for any purpose unless you have 14272 years' worth of food, right?
IIUC, the blogger goes on to patch the game so that you don't get hungry either. But if patching the game is fair play, then what's the point of mentioning the original no-worsening-sickness quirk?
It kinda feels like asking "Can you win Oregon Trail by having Gandalf's eagles fly you to Willamette?" and then patching the game so the answer is "yes." Like, why should I care about that particular initial question so badly that I'd accept cheating as an interesting answer?
Hi, I'm the guy who discovered the quirk in the first place. You can survive pretty much indefinitely at the river, with or without food. You could cross the river at any point. I just thought it would be a laugh to see if you could get to a five-digit year. Then, upon resumption of the journey, the party very rapidly deteriorates and you can only survive about 5 or 6 days before they're all dead, even if you gather food immediately and wait to restore health. So the unmodded achievement was "I lived for 15,000 years in The Oregon Trail" and then I asked moralrecordings for help in reverse-engineering the game so I could get the satisfaction of a successful arrival.
Just a bit of fun.
edit: And the answer to "Why THAT river?" is simply that it's the last river in the game, and when I was hoping to complete a run without any modding, I thought it might be possible to play normally, get to the final river, wait 15,000 years, and then try to limp my decrepit deathwagon to the finish line before we all expired. This proved impossible, sadly.
Thank you for the context!
I also was a little confused by the goal, but that clears it up.
Could be the terrain and geology. About 15,000 years ago, after the last glacial maximum subsided, the largest flood in history carved out that part of Oregon. Maybe there is a similar timetable where the Columbia is silted up.
From Wikipedia: "The wagons were stopped at The Dalles, Oregon, by the lack of a road around Mount Hood. The wagons had to be disassembled and floated down the treacherous Columbia River and the animals herded over the rough Lolo trail to get by Mt. Hood."
https://en.wikipedia.org/wiki/Oregon_Trail#Great_Migration_o...
How did the wagons avoid sinking / not take on water through the wood plank edges? Constant bailing while on the water?
I think you take the wagons apart and put them on a ~boat~ edit: was it a raft? In this context, "float them down" doesn't refer to the wagons floating by their own buoyancy, but rather to their position atop the water.
I think you're right, in Oregon Trail they float on a raft.
Incorrect. It refers to the wagons, which were temporarily converted.
They caulked the wagon[1], turning it into an impromptu boat, which consisted of removing the wheels and axles and then filling in the seams and cracks between the wooden boards of the wagon with soft materials and an oily sealant like tar.
[1]: https://old.reddit.com/r/AskHistorians/comments/6ouy10/june_...
>"Oit came all the old clothes we could spare," he wrote later. "Out came the tar buckets, old chisels and broken knives. They stuffed scrap cloth into creaks and crannies in the wagon and tarred over them.
The mental image conjured up by this scenario is amusing. Your impossibly patient party waits almost 15,000 years to cross a river in a state of suspended animation. Then they finally cross the river and instantly wither away to dust because they had not had a good meal in 15 centuries.
Something that was very common with BASIC interpreters but still baffling is how they were running on machines with extremely limited memory and fairly limited CPU time, but for some reason decided not to make integer types available to programmers. Every number you stored was a massive floating point thing that ate memory like crazy and took forever for the wimpy 8 bit CPU with no FPU to do any work on. It's like they were going out of their way to make BASIC as slow as possible. It probably would have been faster and more memory efficient if all numbers were BCD strings.
BBC BASIC from Acorn in 1982 supported integers and reals. From page 65 of the user guide [https://www.stardot.org.uk/forums/download/file.php?id=91666]
Three main types of variables are supported in this version of basic: they are integer, real and string.

                      integer          real          string
    example           346              9.847         “HELLO”
    typical variable  A%               A             A$
    names             SIZE%            SIZE          SIZE$
    maximum size      2,147,483,647    1.7×10^38     255 characters
    accuracy          1 digit          9 sig figs    —
    stored in         32 bits          40 bits       ASCII values
A%, A, and A$ are 3 different variables of different types.
And to add insult to injury, you write "peperony and chease" on their tombstone.
Edit:
Poor Andy :-(
https://tvtropes.org/pmwiki/pmwiki.php/Trivia/TheOregonTrail
> but for some reason decided not to make integer types available to programmers
They were there, you had to append % to the variable name to get it (e.g. A% or B%, similar to $ for strings). But integers were not the "default."
BASIC is all about letting you pull stuff out of thin air. No pre-declaring variables needed, or even arrays (simply using an array automatically DIM's it for 10 elements if you don't earlier DIM it yourself). Integer variables on BASIC were 16-bit signed so you couldn't go higher than 32767 on them. But if you are going to use your $500 home computer in 1980 as a fancy calculator, just learning about this newfangled computer and programming thing, that's too limiting.
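A quick sketch of how that looked in practice on a Commodore-style Microsoft BASIC (exact truncation and error behaviour from memory):

    10 A = 80.95 : REM DEFAULT VARIABLE, STORED AS A 5-BYTE FLOAT
    20 A% = 80.95 : REM INTEGER VARIABLE, SILENTLY TRUNCATED TO 80
    30 PRINT A, A% : REM A AND A% ARE SEPARATE VARIABLES: PRINTS 80.95 AND 80
    40 A% = 40000 : REM ?ILLEGAL QUANTITY ERROR, OUTSIDE -32768 TO 32767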
I do remember reading some stuff on the C64 that its BASIC converted everything to floating point anyway when evaluating expressions, so using integer variables was actually slower. This also includes literals. It was actually faster to define a float variable as 0--e.g. N0=0--and use N0 in your code instead of the literal 0.
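The classic form of that trick looked something like this (a sketch from memory, not a benchmark):

    10 N1 = 1 : N0 = 0 : REM CONSTANTS PARSED INTO FLOATS ONCE, UP FRONT
    20 X = N0
    30 FOR I = 1 TO 5000 : X = X + N1 : NEXT I
    40 REM LINE 30 BEATS X = X + 1 BECAUSE A LITERAL 1 WOULD BE
    50 REM RE-PARSED INTO FLOATING POINT ON EVERY PASS THROUGH THE LOOP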
Floats were 5 bytes in the early 80's Microsoft BASICs, honestly not "massive" unless you did a large array of them. The later IBM BASICs did have a "double precision" float, 8 bytes if I remember right.
> It probably would have been faster and more memory efficient if all numbers were BCD strings.
I wouldn't be surprised if Mr. Gates seriously considered that during the making of Microsoft BASIC in the late 70's as it makes it easy for currency calculations to be accurate.
> "for some reason decided not to make integer types available to programmers...It's like they were going out of their way to make BASIC as slow as possible."
BASIC was well-intentioned: the goal was to make programming easy enough for ordinary people in non-technical fields, students, people who weren't "programmers", to grasp. To keep it easy, you'd better not scare off adopters with concepts like int vs float, maximum number size, overflow, etc. The ordinary person's concept of a number fits in what computers call a float. You make a good point though that BCD strings might have done the trick better as a one-size-fits-all number format that might have been faster.
BASIC also wasn't intended for computationally intense things like serious number crunching, which back in the day usually was done in assembly anyway. The latency to perform arithmetic on a few floats (which is what your typical basic program deals with) is still basically instantaneous from the user's perspective even on a 1 MHz 8-bit CPU.
The 6502, ironically enough, did support BCD arithmetic directly via its decimal mode
> but for some reason decided not to make integer types available to programmers.
Can you expand upon this? All of the research I've done suggests that, not only was it possible to use integer math in Basic for the Apple II, but there are versions of BASIC that only support integers.
https://en.wikipedia.org/wiki/Dartmouth_BASIC
"All operations were done in floating point. On the GE-225 and GE-235, this produced a precision of about 30 bits (roughly ten digits) with a base-2 exponent range of -256 to +255.[49]"
Good find, Dartmouth was the original BASIC for their mainframe timesharing, Apple and other micro variants came later.
Speaking of, John G. Kemeny's book "Man and the Computer" is a fantastic read, introducing what computers are, how time sharing works, and the thinking behind the design of BASIC.
BASIC doesn't have typing, so most BASIC interpreters just used floating point everywhere to be as beginner friendly as possible.
The last thing they wanted was someone making their very first app and it behaves like:
Please enter your name: John Doe
Please enter how much money you make every day: 80.95
Congratulations John Doe you made $400 this week!
Classic BASIC does have typing, it just shoves it into the variable name. E.g. X$ is a string, X% is a 16-bit signed integer, and X# is a double-precision floating point number.
This started with $ for strings in Dartmouth BASIC (when it introduced strings; the first edition didn't have them), and then other BASIC implementations gradually added new suffixes. I'm not sure when % and # showed up specifically, but they were already there in Altair BASIC, and thence spread to its descendants, so they were well-established by the 1980s.
Interesting. I wonder if this association of $ with strings is related to the use of $ (rather than NUL) as the string terminator for DOS output routines?
Pretty sure DOS got it from CP/M, but I'm not sure why the latter would have it.
That said, it probably has something to do with earliest 5-bit and 6-bit text encodings that were very constrained wrt control characters, and often originating from punch cards where fixed-length or length-prefixed (https://en.wikipedia.org/wiki/Hollerith_constant) strings were more common. E.g. DEC SIXBIT didn't even have NUL: https://en.wikipedia.org/wiki/Six-bit_character_code
I always just figured ‘$’ looks like S, and S is for String
Whoa, I never realized this. And I spent much time in high school writing programs in QBASIC.
IIRC Python had similar reasoning in version 3, when dividing two integers started returning a float, where versions 1 and 2 had silently truncated to an integer. R always did it that way and Julia still assumes integer, which occasionally trips me up when switching languages.
Wozniak's original BASIC for the Apple II only supported integers; when Apple decided they needed floating point and Woz refused to spend time on it, they decided to license it from Microsoft, producing Applesoft BASIC. Applesoft was slower than Woz's BASIC, because it performed all arithmetic in floating point.
As a kid hacking away on an Apple II this was apparent; all the good Basic games were written in Woz’s Integer Basic.
> Something that was very common with BASIC interpreters but still baffling is how they were running on machines with extremely limited memory and fairly limited CPU time, but for some reason decided not to make integer types available to programmers.
To be fair, JavaScript suffers from the same laziness :)
MS BASIC on TRS-80 model 100
default, a normal variable like N=10, is a signed float that requires 8 bytes
optional, add ! suffix, N!=10, is a signed float that requires 4 bytes
optional, add % suffix, N%=10, is a signed int that requires 2 bytes
And that's all the numbers. There are strings, which use one byte per byte, but you have to call a function (ASC) to convert a single byte of a string to its numerical value.
An unsigned 8-bit int would be very welcome on that and any similar platform. But the best you can get is a signed 16-bit int, and you have to double the length of your variable name all through the source to even get that. Annoying.
I remember having integer variables in Amstrad CPC (Locomotive) Basic. Something with the % symbol. edit: ChatGPT says that BBC BASIC and TRS-80 Microsoft BASIC also supported integer variables with % declaration.
The wikipedia page for Microsoft BASIC (of which Applesoft Basic is a variant), https://en.wikipedia.org/wiki/Microsoft_BASIC, mentions that integer variables were stored as 2 bytes (signed 16-bit) but all calculations were still done in floating point (plus you needed to store the % character to denote an integer var).
So the main benefit was for saving space with an array of integers.
Yes, Locomotive BASIC also supported DEFINT command, so, all variables in a given range would be treated as integers without "%" suffix.
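For example, something like this (from memory, so the exact spelling may be off):

    10 DEFINT A-Z : REM EVERY PLAIN VARIABLE FROM A TO Z IS NOW A 16-BIT INTEGER
    20 SIZE = 5 : REM BEHAVES JUST LIKE SIZE%
    30 PRINT SIZE * 3 : REM PRINTS 15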
Maybe it was a consideration of code size. If you already choose to support floats then you might as well only support floats and save a bunch of space by not supporting other arithmetic types.
Interestingly enough, the article notes that this program was written for Microsoft’s AppleSoft BASIC, but Woz famously wrote an Integer BASIC that shipped on the Apple II’s ROM.
Woz planned to add floating point support to his Integer BASIC. In fact, he included a library of floating point ROM routines in the Apple II ROMs, but he didn't get around to modifying Integer BASIC to use them. He ended up working on the floppy disk controller instead.
When he finally got around to doing it, he discovered two issues – Integer BASIC was very difficult to modify, because there was never any source code. He didn't write it in assembly, because at the time he wrote it he didn't yet have an assembler, so he hand assembled it into machine code as he worked on it. Meanwhile, Jobs had talked to Gates (without telling him) and signed a deal to license Microsoft Basic. Microsoft Basic already had the desired floating point support, and whatever Integer BASIC features it lacked (primarily graphics) were much easier to add given it had assembly code source.
https://en.wikipedia.org/wiki/Integer_BASIC#History
I was thinking about this the other day, I wonder if anyone has ever tried finishing off what Woz never did, and adding the floating point support to Integer BASIC? The whole "lacking source" thing shouldn't be an issue any more, because you can find disassemblies of it with extensive comments added, and I assume they reassemble back to the same code.
The Apple //e ROMs had AppleSoft BASIC. Integer basic could be loaded from the original DOS disks
According to Wikipedia, the original Apple II ROM had Integer BASIC. Apple licensed AppleSoft in response to customer demand for a floating point BASIC, and it became the in-ROM BASIC for the Apple IIc and IIe.
Isn’t that what I just said?
You phrased your comment as a contradiction of mine. I elaborated on my comment, focusing on the distinction between Apple II and Apple IIe, including the IIc which came between, and citing my source.
Most of the 8-bit BASICs of the time share a common ancestor. Perhaps making every number a floating point was a reasonable decision for the hardware that the common ancestor BASIC was written for and it just got carried over through the generations.
I think it's more likely that the language had no concept of types, so numbers had to "just work". You can do integer math (slowly) using floating point, but you can't do floating point math with integers. Especially since the language is targeted at beginners who don't really understand how their machines work.
Would have been interesting to see a version of BASIC that encoded numbers as 4 bit BCD strings. Compared to the normal 40 bit floating point format you would save memory in almost every case, and I bet the math would be just as fast or faster than the floating point math in most cases as well. The 4 bit BCD alphabet would be the numbers 0-9, as well as -, ., E, a terminator, and a couple of open slots if you can think of something useful. Maybe an 'o' prefix for octal and a 'b' for binary?
If you look at the ads for microcomputer software, there was a lot of business-related stuff, e.g. AR, AP, etc. Stuff where a kid in public school had no idea what those acronyms meant.
If you're writing business software, you'll need to support decimals for currency-related calculations. Tax and interest rates also require decimal values. So floating point helped a lot.
When the 8-bit microcomputers went mainstream, (Apple II, Commodore PET, TRS-80), graphics got more popular - sin(), cos(), and other trig functions are popular and their return values are never normally expressed as integer values.
Sure, most would never write a fast arcade-like game in BASIC, but as a code trial playground, turnaround time was relatively quick.
I don't understand your argument
Especially when doing financial calculations you do not want to use floating point but fix point O_o
Explain fixed-point math to a small-to-medium sized business owner.
With signed 16-bit integers (which Apple Integer Basic provided), you've got a range of 32767 to -32768 (wikipedia says Apple Integer Basic couldn't display -32768). But if you do the naive fixed-point using 16-bit ints, you'll have a range of 327.67 to -327.68, assuming 2 digits for the decimals.
16-bit integers didn't have enough precision for many of those 1970s/1980s use cases.
yes, floating-point math has problems. but they are well-known problems - those corner cases were well-known back then.
I'd rather explain fixed point math to a small business owner than explain to his accountant why pennies keep randomly disappearing and popping into existence.
You want to try to explain fixed point math to someone who is for the first time discovering the concept of a "variable"?
It only lets you store whole pennies, not fractions of a penny. This is to help stop rounding errors accumulating, and eventually making a calculation wrong. With fixed point, if you put the formula in right, there will be no surprises.
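A minimal sketch of the idea in garden-variety BASIC (variable names invented for illustration, and the pennies aren't zero-padded):

    10 CENTS = 1999 + 250 : REM $19.99 PLUS $2.50, KEPT AS WHOLE PENNIES
    20 D = INT(CENTS / 100) : REM DOLLAR PART
    30 P = CENTS - D * 100 : REM LEFTOVER PENNIES
    40 PRINT "TOTAL $"; D; "."; P : REM NO BINARY ROUNDING ERRORS TO EXPLAIN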
That knowledge was less widespread at the time.
...except that using floating-point values to represent currency was never recommended because of precision issues. Using fixed-point or "integer cents" was preferable.
Atari BASIC used BCD for numbers. It is notably not a Microsoft BASIC descendant. Cromemco BASIC is another example.
that would be 15 millennia or 150 centuries
It's like they knew how popular JS would be someday.
Sounds like JavaScript!
Around 1999, when I stumbled upon JavaScript, I was aghast that numbers were always 64-bit floating point. I thought that language would go nowhere.
You board the Generation Ship Oregon Trail with some trepidation. If the scientists are correct you will be in suspended animation for the next 14272 years. You already feel colder somehow. To the West you see a robotic barkeep.
Pedantic note - if you have suspended animation, you don’t need Generation ships.
"Excuse the mess. Most unfortunate. A diode blew in one of the life support computers. When we came to revive our cleaning staff, we discovered they'd been dead for thirty thousand years. Who's going to clear away the bodies? That's what no-one seems to have an answer for."
First Class tickets get suspended animation tanks
Sometimes, I hate working with code where the developer was either a Basic developer or a mathematician: variable names limited to two characters (like "H" for health and "PF" for pounds of food remaining) work when manipulating an equation and are a lot better than 0x005E, but the code isn't nearly self-documenting. On the other hand, the variable name could be "MessageMappingValuePublisherHealthStateConfigurationFactory". Naming things is one of the hard problems in computer science, and I'm glad we're past the point where the number of characters was restricted to 2 for performance reasons.
Unrelated, my monitor and my eyeballs hate the moire patterns developed by the article's background image at 100% zoom - there's a painful flicker effect. Reader mode ruins the syntax highlighting and code formatting. Fortunately, zooming in or out mostly fixes it.
over the years i've had to translate a lot of code from academics/researchers into prod systems, and variable/function naming is one of their worst habits.
just because the function you're implementing used single-character variables to render an equation in latex, doesn't mean you have to do it that way in the code.
a particular peeve was when they make variables for indexed values named `x_i` instead of just having an array `x` and accessing the ith element as `x[i]`
At least I've never seen UTF8 math symbols in the wild. Julia, Python, and other languages will let you use the pi symbol for 3.14... instead of just calling it pi.
I've seen that. Some Haskell libraries use Unicode for custom operators. Makes the code even harder to understand.
have you seen arthur whitney's code style?
https://www.jsoftware.com/ioj/iojATW.htm
i tried this style for a minute. there are some benefits, and i'll probably continue going for code density in some ways, but way less extreme
there's a tradeoff between how quickly you can ramp up on a project, and how efficiently you can think/communicate once you're loaded up.
(and, in the case of arthur whitney's style, probably some human diversity of skills/abilities. related: i've thought for a while that if i started getting peripheral blindness, i'd probably shorten my variable names; i've heard some blind people describe reading a book like they're reading through a straw)
40x25 text screens and line-by-line editors encourage short variable names as well
Also with some of that older stuff it could be that the compiler only let you have 8 chars for a variable name.
Applesoft BASIC only uses the first two characters (!) to distinguish one variable name from another. WAGON and WATER would be the same.
(page 7)
https://mirrors.apple2.org.za/Apple%20II%20Documentation%20P...
This is generally true of most 6502 Microsoft BASIC derivatives, Apple and Commodore included.
That was true of most BASIC dialects of that era. Many hard to track down bugs were introduced when programmers didn't understand this and had WATER overwrite WAGON.
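For instance, on one of those two-character interpreters:

    10 WAGON = 1
    20 WATER = 2
    30 PRINT WAGON : REM PRINTS 2, BECAUSE BOTH NAMES COLLAPSE TO "WA"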
Amstrad CPC's Locomotive BASIC supported up to 40 character variable names with no trimming.
Pretty sure the c64 did the same thing...
> variable names limited to two characters
(It sounds like there was a justified reason for that here, though -- the variable names are not minimized during compilation to disk.)
On the other hand, sometimes less descriptive but globally unique names add clarity because you know what they mean across the program, kinda like inventing your own jargon.
Maybe "PF" is bad in one function but if it's the canonical name across the program, it's not so bad.
and then there are the people who name their variables Dennis...
"The game dicks you at the last possible moment by expecting the year to be sensible"
Great read on how to actually hack. It takes you through the walls he hits, and then how hitting each wall "opens up a new vector of attack".
> Several days later, I tried writing a scrappy decompiler for the Applesoft BASIC bytecode. From past experience I was worried this would be real complex, but in the mother of all lucky breaks the "bytecode" is the original program text with certain keywords replaced with 1-byte tokens. After nicking the list of tokens from the Apple II ROM disassembly I had a half-decent decompiler after a few goes.
Applesoft has a BASIC decompiler built in, it's called "break the program and type LIST". Maybe Oregon Trail did something to obscure this? I know there were ways to make that stop working.
If I remember correctly applesoft also had a few single bytecodes that would decode to other key words. Like PRINT and ?. But I could be remembering badly.
Depends on the version. The original was BASIC, but the one with graphics and sound (which I think was more popular?) was assembly.
Wikipedia implies this version was mostly BASIC, with the hunting minigame in assembly.
Yes, a few minutes spent reading about Applesoft BASIC or Microsoft BASIC would've reduced the cringe factor in reading a neophyte trying to mentally grapple with old technology.
"bytecode" and "virtual machine", no, no, no. That's not the path to enlightenment...
in this case, print debugging is your best bet.
> So 1985 Oregon Trail is written in Applesoft BASIC
This surprised me for some reason. I guess it's been 30-some years, but I remember my adventures in Apple II BASIC not running that quickly; maybe Oregon Trail's graphics were simpler than I remember.
I guess I just assumed any "commercial" Apple II games were written in assembly, but perhaps the action scenes had machine code mixed in with the BASIC code.
There are so many different versions of Oregon Trail, you might have played the old version first but substituted the graphics and game play you remember with a later version you also played. Not to mention that imagination fills in a lot of the details when you're playing those games, usually as a child.
There are two versions of Ultima 1: the original is BASIC with assembly, and there is a remake in pure assembly. You can definitely tell the improvements the asm version brings, with the overworld scrolling faster and the first-person dungeons redrawing very quickly.
So - I'm guessing game logic of MECC Oregon was in Basic with some assembly routines to re-draw the screen. BTW original Oregon Trail was also 100% basic and a PITA to read. You're really getting to the edges of what applesoft basic is practically capable of with games like Akalabeth and Oregon
That reminds me of finding out Sid Meier's Pirates! on the C64 was a mix of BASIC and assembly. You could LIST a lot of it, but the code was full of SYS calls to various assembly helpers, which I remember was incredibly frustrating as I did not yet have any idea how assembly worked so it felt so close but so far to being able to modify it.
Wikipedia tells me that the 1985 version's hunting minigame is in assembly; it does not explicitly say that the rest is in Basic but it definitely implies this.
Oregon Trail was conceptually simple and so well crafted that BASIC would be plenty fast. Most other games were more complex and probably needed assembly. Though it was common to call inline assembly (as binary code) in that era as well.
Not uncommon, at least on the A2 and C64, to have a BASIC scaffold acting like a script that runs various machine language subroutines and/or the main game loop.
I also thought it was interesting that it was actually several BASIC programs with data passed back and forth by stuffing it in specific memory locations.
As an old AppleSoft Basic/65C02 assembly language hacker, I actually followed the article from first principles and I remembered how AppleSoft basic was stored in memory - one byte code for each command and each line ended in memory with a pointer to the next line in sequence.
But this I got lost in…
> Most of the game is handled by the main program "OREGON TRAIL" (written by John Krenz), which loads in modules like "RIVER.LIB" and keeps access to all the variables.
AppleSoft basic didn't have the concept of libraries. Were they just POKEing values into memory, loading the next AppleSoft Basic program (calling them LIBs), and having the next one PEEK to retrieve the values?
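Something along these lines, presumably; 768 ($300, a page that's normally free on the Apple II) is just my guess at a stash address, not necessarily what the game used:

    100 REM -- IN THE MAIN PROGRAM --
    110 POKE 768, H : REM STASH HEALTH (0-255) AT $300
    120 PRINT CHR$(4); "RUN RIVER.LIB" : REM DOS 3.3: LOAD AND RUN THE MODULE

    200 REM -- FIRST THING IN RIVER.LIB --
    210 H = PEEK(768) : REM RECOVER THE VALUE, SINCE RUN WIPES ALL VARIABLES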
> If you want to modify anything else in the 16-bit address space, you first need to write a 16-bit pointer to the Zero Page containing the location you want, then use an instruction which references that pointer.
This is so completely wrong I question the person's ability to understand what's happening in the emulator.
Also, that LDA instruction reads the 2-byte pointer from the memory location, adds Y, and loads the accumulator from the resulting memory position. IIRC, the address + Y can't cross a page - Y is added to the least significant byte without a carry to the MSB.
> and the program is stored as some sort of bytecode
We call it "tokenized". Most BASIC interpreters did that to save space and to speed parsing the code (it makes zero sense to store the bytes of P, R, I, N, and T when you can store a single token for "PRINT").
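You can even see the tokens from inside BASIC itself. A rough sketch for Applesoft, where the start-of-program pointer lives at 103/104 (43/44 on a C64); token values differ from dialect to dialect:

    10 PRINT "HELLO"
    100 S = PEEK(103) + 256 * PEEK(104) : REM START OF THE TOKENIZED PROGRAM
    110 FOR I = 0 TO 15 : PRINT PEEK(S + I); : NEXT I
    120 REM EACH LINE IS A 2-BYTE LINK, A 2-BYTE LINE NUMBER, TOKENS/ASCII, THEN A 0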
> This is so completely wrong I question the person's ability to understand what's happening in the emulator.
I'd argue that it's not completely wrong in the context of a BASIC program; other addressing modes exist, but I don't think the BASIC interpreter will use self-modifying code to make LDA absolute work.
> IIRC, the address + Y can't cross a page - Y is added to the least significant byte without a carry to the MSB.
If wikipedia is accurate, the address + Y can cross a page boundary, but it will take an extra cycle --- the processor will read the address + Y % 256 first, and then read address + Y on the next cycle (on a 65C02 the discarded read address will be different). But if you JMP ($12FF), it will read the address from 12FF and 1200 on a 6502, and from 12FF and 1300 on a 65C02 --- that's probably what you're thinking of.
Thanks. It’s been 40+ years since I last programmed in 6502 assembly. I am rusty.
Yeah. I got as far as the zero-page description and stopped reading. The 6502 was the first microprocessor I learned (on the KIM-1) in 1979. All that zero page addressing offers is a faster way to access memory, because when using zero-page addressing modes, you only need one octet for the address instead of two. When using them on a 1MHz CPU with no cache, you've just saved many microseconds because you didn't need to fetch the high order address octet from program memory!
On the 6502, you can absolutely access all 64K of memory space with an LDA instruction.
The other weird thing about the 6502 is "page one", which is always the stack, and is limited in size to 256 bytes. The 256 byte limit can put a damper on your plans for doing recursive code, or even just placing lots of data on the stack.
I've done lots of embedded over the years, and the only other processor I've developed on that has something similar to the 6502 "zero page" memory was the Intel 8051, with its "direct" and "indirect" memory access modes for the first 128 bytes of volatile memory (data, idata, bdata, xdata, pdata). What a PITA that can be!
> On the 6502, you can absolutely access all 64K of memory space with an LDA instruction.
There are two LDA instructions (maybe more, I too am about 40 years rusty). One loads from page 0 only and thus saves time by only needing to read one byte of address, and the other reads two bytes of address and can read from all 64k. In later years you had various bank switching schemes to handle more than 64k, but the CPU knew nothing about how that worked, so I'll ignore them. Of course your assembler probably just called both LDA and used other clues to select which, but it was a different CPU instruction.
There are multiple, but indirect indexed (pointer + offset) is only for page zero, but you have two of them, one for X and one for Y (I don't recall the differences anymore).
The 6502 was a really sweet little processor.
> All that zero page addressing offers is a faster way to access memory
I like to say it made me feel like I had 256 general-purpose registers to play with.
> Specialist knowledge is for cowards
What a strange and thought provoking statement.
“A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”
― Robert A. Heinlein
Great quote ... but: Warning from experience: do not try using that last part in a job application.
Or the first part.
I find it amusing that the bug in the final screen is essentially the Y2K bug.
In the dungeon crawling classic Wizardry, there was a cheat: if you used your bishop to try to 'I'dentify the object in inventory slot 9 (there were only 8 slots) over and over, you'd get a 100,000,000 XP bonus. I believe it was an unintentional bug.
Some early tiny computers were actually calculator chips. So, only floating point. And, implemented in BCD. So slow as f*k.
I remember a TI home computer, on a counter in a store. I typed FOR I = 1 TO 10000; NEXT I and RUN.
It never finished. I changed it to 1000; it still didn't finish. So I changed it to FOR I = 1 TO 10 with PRINT I, and it printed 1 and a delay and 2 and a delay and so on. It took ten seconds to finish.
TI BASIC has some unique implementational own goals, however, particularly with respect to system architecture and the middle-level interpreted language TI BASIC is written in. It really makes the 9900 CPU look unfairly bad.
Realistically, the river would have changed its course after 14272 years.
The geology of the PNW has been so unstable for millions of years ... that about once every century your riverside would likely either be flooded-out or buried in volcanic ash.
Can you pass the 14,000+ years by going hunting?
You'd have to load up on ammo like it's a war wagon. And it's likely you'd blow your leg off at some point, yet still somehow die of dysentery.
The scarce hunting was a lie. Dysentery was a risk of speed: rest and eat well, and it would go away. Wagons were limited, as were oxen, clothes, food, wagon wheels. The only thing that could grow without limit was money, and there was the 'cook,' a provable way to get as high a score as you wanted. The true weakness was trading a set of clothes for an ox: $5 profit every time. If you hung back a few miles from the Dalles, you could trade as much as you wanted, back and forth, and then sell all your profits.
You could pass the 14,000+ years by hunting... and trading... at some point, all you needed to do was trade...
I had not played the CDC-Cyber mainframe version, nor the Apple II version; I started with the Macintosh version, and having passed Econ with flying colors I set my sights on 'cooking' the game, but the same 'cook' was available on the Apple ][ and the PC.
I would suspect that in 10,000 years... oxen, over 500 generations, would have been bred for extreme longevity, and the humans too.
Finally, the source code was published in May 1978, Creative Computing, page 137. So.... hack the source. (CDC-Cyber Basic, I believe...) for which "On CDC Cyber Basic, the size of an integer is typically 16 bits, meaning it can represent whole numbers ranging from -32,768 to 32,767."
Well if you tried this in real life your great...grandkids would now have modern cars to drive to the nearest store and buy more ammo anytime they want. Or they could just buy food from a modern grocery store.
> You'd have to load up on ammo like it's a war wagon. And it's likely you'd blow your leg off at some point, yet still somehow die of dysentery.
XKCD says that's what happened in real life.[1] And that was for the people who made it to Oregon.
> If you're into retro computing, you probably know about Oregon Trail
Damn, that made me feel really old.
Lol. I played Oregon Trail as part of curriculum in elementary school in the mid 90s, I guess by then it was already a retro throwback.
EDIT: I played the 1985 version, I didn't know there was a text adventure.
Yesterday I was at a gaming parlor in SF and they had "Oregon Trail the card game". I sent my brother a picture. I couldn't get my kids to even understand why it was special.
It sold 65 million copies? You've got to be kidding me. Someone made 9 figures off of Oregon trail? Bwahahaha.