Comments:
I have a doubt: if x86 documentation is freely available for students to learn from, then why is x86 a closed, proprietary ISA?
The day is coming when no one knows how it works, but it still works... an infinite regress is coming, when nature is no longer comparable with technology.
✨️🧠📈🔼
Thank you for making this lecture freely available!
Really great! Thanks!
Other than setting flags, why would we ever want to write to the same register twice in a row instead of just eliding the first instruction?
A systems programmer or system designer must have knowledge of this topic, but an application programmer or (application) software engineer need not.
Massachusetts are just as lousy little liars as any other swindlers!
C and C++ compilers have reached such a level of efficiency that coding in assembler is no longer needed, even to optimize code that might be time-critical...
Developing control units in an automotive environment, I spent a lot of time with my customers, which were among the most prominent European car makers, mainly executing code reviews.
Code reviews consisted of checks to verify that coding rules were met, and in particular that no assembler code was used!!!
What are the assembly language names?
Thanks for the free lecture.
👏👏
I'm learning assembly on the Z80 and now, after watching half of this course, I'm thinking about moving to C and hand-compiling the required optimized assembly... this playlist is great stuff... sure, a lot of it I won't need, but it's nice to see not much has changed in the x86-64 world...
Big thanks to MIT for sharing this stuff with us all :)
MOVE should be called ASSIGNMENT but they were afraid of the shorthand mnemonic
My only experience with assembly is Zachtronics' EXAPUNKS game; it led me here, and I understand most of it surprisingly well. To anyone who wants to learn assembly in a fun way, definitely check the game out.
Does this relate to operating systems?
I used to teach assembly language, DOS, CPU architecture, and the instruction set timing and bus cycle operation for several systems, including the DEC PDP-11, Intel MCS-86 and MCS-51, and earlier. My class started with hardware operation from the first clock pulse after the release of the reset button, covering machine instruction and bus cycles and how instruction execution controlled hardware operation. We also introduced hardware emulators. To support the class, I wrote a textbook on the theory and operation of microprocessor hardware and software; the Z-80/8080 CPUs were used as the example architecture. My class supported the NTX 2000 NATO telephone switching system back in 1980 for NATO & ITT, at the introduction of fully integrated digital switching systems for central office telephone switches: basically, 256 microprocessors operating in fully redundant pairs, all under the control of a custom minicomputer executing a real-time, multi-tasking operating system.
I liked it a lot 🇧🇷
NO! Where have I come? I must leave before it's too late, noo
Just allow me to understand the OS, etc...
Joyful to watch, even as entertainment.
Assembly is very fun to do. Maybe a suicidal task if you're doing something for a normal OS, but pure joy when there is no such thing as an OS, no C, and you're starting practically from scratch: on microcontrollers, for example!
Sir, I worked at an R&D corporation, young and therefore inexperienced.
Sir, teach me the "Quantum Computer".
I have the intellect. Math = P's, Stats = formulas solving the "Key"?
Teach me, "Teacher = Master".
Assembly was the heart 35 years ago?
Is it so?
The video nobody can dislike.
I love his enthusiasm for the historical confusion X)
This lesson was filmed in 2018, which, if I recall correctly, was when Intel introduced AVX-512 in its Skylake architecture; if MIT were to film this course again, I wonder if they would update it to an ARM assembly version 🙂
Are there videos of him teaching this course?
Charles Eric Leiserson is a computer scientist, specializing in the theory of parallel computing and distributed computing, and particularly practical applications thereof. As part of this effort, he developed the Cilk multithreaded language. He invented the fat-tree interconnection network, a hardware-universal interconnection network used in many supercomputers, including the Connection Machine CM5, for which he was network architect. He helped pioneer the development of VLSI theory, including the retiming method of digital optimization with James B. Saxe and systolic arrays with H. T. Kung. He conceived of the notion of cache-oblivious algorithms, which are algorithms that have no tuning parameters for cache size or cache-line length, but nevertheless use cache near-optimally. He developed the Cilk language for multithreaded programming, which uses a provably good work-stealing algorithm for scheduling. Leiserson coauthored the standard algorithms textbook Introduction to Algorithms together with Thomas H. Cormen, Ronald L. Rivest, and Clifford Stein.
Leiserson received a B.S. degree in computer science and mathematics from Yale University in 1975 and a Ph.D. degree in computer science from Carnegie Mellon University in 1981, where his advisors were Jon Bentley and H. T. Kung.[2]
He then joined the faculty of the Massachusetts Institute of Technology, where he is now a professor. In addition, he is a principal in the Theory of Computation research group in the MIT Computer Science and Artificial Intelligence Laboratory, and he was formerly director of research and director of system architecture for Akamai Technologies. He was Founder and chief technology officer of Cilk Arts, Inc., a start-up that developed Cilk technology for multicore computing applications. (Cilk Arts, Inc. was acquired by Intel in 2009.)
Leiserson's dissertation, Area-Efficient VLSI Computation, won the first ACM Doctoral Dissertation Award. In 1985, the National Science Foundation awarded him a Presidential Young Investigator Award. He is a Fellow of the Association for Computing Machinery (ACM), the American Association for the Advancement of Science (AAAS), the Institute of Electrical and Electronics Engineers (IEEE), and the Society for Industrial and Applied Mathematics (SIAM). He received the 2014 Taylor L. Booth Education Award from the IEEE Computer Society "for worldwide computer science education impact through writing a best-selling algorithms textbook, and developing courses on algorithms and parallel programming." He received the 2014 ACM-IEEE Computer Society Ken Kennedy Award for his "enduring influence on parallel computing systems and their adoption into mainstream use through scholarly research and development." He was also cited for "distinguished mentoring of computer science leaders and students." He received the 2013 ACM Paris Kanellakis Theory and Practice Award for "contributions to robust parallel and distributed computing."
(source: Wikipedia)
Please, I learned computer architecture through some books, but my understanding of many concepts (interrupts, I/O modules...) is not so clear.
Can someone from MIT or anywhere help with interesting resources that are precise? Even solved labs can help me.
I hated this course
Prof. Leiserson is an amazing instructor. I love watching his lectures. He never rushes through the material and always prioritizes quality over quantity. Thank you Prof. Leiserson and MIT OCW 🙏🏽
Thank you for sharing and giving some insight into the workload our computers carry under the hood.
Assembly is a very nice programming language, but damn, it is complicated ^^ I'm a French-speaking programmer/teacher, and I give this lecture all my positive feedback. Notably, I did not need Google's subtitles to understand the instructor... that's great!!
Did YT forbid publishing a video's posting date?
So that's what Creed did after The Office.
This lecture is so nostalgic for me and reminds me of my first programming assignments, literally in assembler (albeit on a mainframe). I enjoyed it.
At the other end of my career, I caught a glimpse of what was then called "microcode"...
An "instruction decode" cycle means that the instruction bits enable or disable circuitry along data signal pathways (e.g., enable a shift register to multiply/divide by 2, or enable the block of 2-bit adders to sum bytes...). I envisioned this magic as a really complex railway switching yard. This is the coalface where machine code's 1s and 0s appear as high or low electrical potentials.
Then came learning about microcode, the embedded multi-step gating/latching operations that occur (synchronously) within one or a few clock cycles. Way back then, the hardware I saw didn't have advanced microcode or a machine code or assembly instruction to multiply two integers; multiplication was done with many assembly instructions (usually a library routine)...
It helped when thinking about how the effect of C's pre-/post-increment (and decrement) operators could be achieved, for instance.
I began with assembly in 1987 but sadly ended some lines of code later. I knew how almighty it is, but at that time I was studying architecture. Today I sometimes think how helpful it would have been all this time to have skills like that... This is the real reason why Mies van der Rohe said "Less is more" ;)
It became clearer to me once I started doing digital electronics projects myself. What is amazing is that I keep arriving at ideas someone has already invented, yet it is wonderful to invent them myself sometimes! It is like exploring history or archaeology: you find out what is going on inside your computer.
Great lecture. I graduated 6+ years ago and already knew most of the material, but I just wanted to say I really appreciate the lecture and really liked the professor's attitude and presentation 👌
Thank you SO MUCH for putting this online!! I'm writing an assembler myself to really understand assembly and machine code, and this lecture is an eye-opener in so many ways for me.
Great video, great introduction.
Very nice video about languages and computers.
Ah, that terrible AT&T syntax 😁
Watch your umms.
Still changing the names of established barriers to optimization, I see. A resource contention that causes a bottleneck is now called a "hazard". In C we could wait on the resource and set a counter to come back and check again. In C++ we could set a semaphore to flag us when the resource becomes available. The compiler's optimizer should manage resource contention, not middleware or application code. In my book, a hazard begets errors, crashes, data loss, downtime, and many sleepless nights. Why overly dramatize the loss of a dozen or two cycles? If you can't afford another $50 grand for more parallelism, build smarter compilers, linkers, and first-pass code scanners; these can work wonders. Work with your CPU consultant at the chip vendor. They have great ideas and they don't get no respect. Talk with them, and bring extra buckets for the mind dump you're gonna get. Some will even write the code for you.