Columns: title, subreddit, post_id, score, link_flair_text, is_self, over_18, upvote_ratio, post_content, C1, C2, C3, C4, C5 (post text plus up to five comment strings per row)
[ "[crosspost] We are an international group of leading physicists (including many Nobel laureates) assembled here at Case Western Reserve University to celebrate 50 years of “the most successful theory known to humankind”… and explore what the next 50 years might hold! Ask us anything!" ]
[ "math" ]
[ "8nrxlz" ]
[ 208 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
Hi Reddit! In honor of the 50th anniversary of Steven Weinberg’s world-changing publication, A Model of Leptons, the work that solidified what we now call “The Standard Model of Physics”, Case Western Reserve University is hosting a once-in-a-lifetime symposium this weekend that features talks from many of the most famous names in physics… including 8 Nobelists and over 20 scientists who have made immeasurable contributions to “the most successful theory known to humankind.” We’re here to honor this world-changing scientific work, but perhaps most important of all, to look to the next 50 years of probing the deepest mysteries of the Universe… what incredible wonders might be out there waiting to be discovered? Are we on the verge of solving the great mysteries of Dark Matter and Dark Energy? Will we soon know exactly what happened in the very first moments of our Universe’s birth? And… could a working theory of Quantum Gravity finally be within reach?

Proof: The talks will be live-streamed all weekend long. A science writer-filmmaker will be hanging out in the live-stream chat box to translate the science in real time. But before we all get to work, we wanted to spend some time with you all! Ask us anything!

Live AMA participants: Gerard will be answering questions specifically directed towards him.

Conference Organizer. Distinguished Professor of Physics (Case Western Reserve University). Director of the Institute for the Science of Origins. Director of the Center for Education and Research in Cosmology and Astrophysics. Research Questions:

Professor of Physics Emeritus (MIT). (Fmr.) Director of the Laboratory for Nuclear Science and Head of the MIT Physics Department. Research Focus:

Professor (Berkeley Center for Cosmological Physics). Senior scientist at the Lawrence Berkeley National Laboratory. Guest Star on The Big Bang Theory / Idol of Dr. Sheldon Cooper. Research Focus:

Professor of Physics at University College London (UCL). Author of Smashing Physics. Project Leader of the ATLAS “Standard Model Group” at the LHC at CERN. Pioneered the first measurements of “Hadronic Jets”. Winner of the Royal Society Wolfson Research Merit Award. Winner of the Chadwick Medal.

Professor Emeritus of Particle Physics and Astrophysics, SLAC National Accelerator Laboratory (Stanford University). Founder of “Peccei-Quinn theory”. Current Focus: Winner of the Dirac Medal, the Klein Medal, the Sakurai Prize, the Compton Medal, and the Benjamin Franklin Medal.

Distinguished University Professor and Institute Professor (Case Western Reserve University). Leading pioneer of MRI, CT, PET, and medical radiation technology. Incubated multiple research projects into full-scale technology companies. Co-author of 10 patents. Research questions:

Professor Emeritus of Physics (UC Berkeley). Pioneer of the ground-breaking discovery of the strong-interaction corrections to weak transitions. Successfully predicted the mass of the charmed quark. Successfully predicted 3-jet events in high-energy particle accelerators. Successfully predicted the mass of the b-quark. Made history as UC Berkeley’s first female physicist to receive tenure. Research questions:

Jon A. McCone Professor of High Energy Physics (Caltech). Discoverer of Heavy Quark Symmetry. Winner of the 2001 Sakurai Prize. Successfully predicted the decays of c- and b-flavored hadrons. Science consultant to Marvel Studios' Iron Man 2. Research Questions:

Professor Emeritus at the SLAC National Laboratory (Stanford University). Discoverer of “Bjorken Scaling”, which successfully predicted quarks as physical objects. Winner of the Dirac Medal. Winner of the Wolf Prize. Winner of the EPS High Energy Physics Prize. Author of the seminal Relativistic Quantum Fields and Relativistic Quantum Mechanics. Research Questions:

Professor (Case Western Reserve University). Pioneer of ground-based observational techniques to study high-energy cosmic radiation. Research questions:

Professor (Case Western Reserve University). Leading researcher in quantum many-body physics, particle astrophysics, and cosmology. Expert on the deep mathematics inherent in modern art. Expert on the statistical physics inherent to the evolution of human language.

Assistant Professor (Case Western Reserve University). Expert on physics theories beyond the Standard Model.

Assistant Professor (Case Western Reserve University). Expert on early-Universe cosmology. Expert on modified and alternative gravity theories.

Ephraim Gildor Professorship of Computational Theoretical Physics (Columbia University). Pioneer of the groundbreaking lattice QCD approach to simulating strong interactions. Winner of the Gordon Bell Prize. Developmental leader of IBM’s QCDOC supercomputer project to achieve 10 Tflops.

Associate Professor (Case Western Reserve University). Expert neutrino hunter. Pioneer of cyclotron radiation electron spectroscopy. Expert on next-generation neutrino detectors.

Co-creator of “A Light in the Void” science symphony concert with composer Austin Wintory. Writer-Director for “Through the Wormhole with Morgan Freeman”. Co-Executive Producer of “National Geographic: Breakthrough”.
Can you guys prove for me that the core of the Sun does not spin, please? It'll settle a long standing bet I have with myself.
Why is notation in physics so atrocious? Why did anyone ever criticize quantized gravity? Can Cooper pairs be formed by groups of more than two particles?
You gotta click the link above to get to the AMA :)
It rotates.
You should click the link above to visit the AMA :)
[ "Simple Questions - June 01, 2018" ]
[ "math" ]
[ "8nt491" ]
[ 20 ]
[ "" ]
[ true ]
[ false ]
[ 0.84 ]
This recurring thread will be for questions that might not warrant their own thread. We would like to see more conceptual-based questions posted in this thread, rather than "what is the answer to this problem?". For example, here are some kinds of questions that we'd like to see in this thread: Can someone explain the concept of manifolds to me? What are the applications of Representation Theory? What's a good starter book for Numerical Analysis? What can I do to prepare for college/grad school/getting a job? Including a brief description of your mathematical background and the context for your question can help others give you an appropriate answer.
Assuming all the events are independent, you cannot guarantee success after a finite number of attempts. The probability of zero successes after n attempts is 0.99^n, and that's never zero for any n.
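A quick numeric check of this (a sketch assuming a per-attempt success probability of 0.01, matching the 0.99 failure probability above):

```python
# Probability of at least one success in n independent attempts,
# each succeeding with probability p = 0.01 (so failing with 0.99).
def p_at_least_one(n: int, p: float = 0.01) -> float:
    return 1 - (1 - p) ** n

# The probability approaches 1 but never reaches it for finite n.
print(p_at_least_one(100))   # about 0.634
print(p_at_least_one(1000))  # about 0.99996, still strictly less than 1
```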
Interesting question! The effect you have noticed is real. If you choose a real number x uniformly (say, between 0 and 2*pi), then sin(x) is more likely to be near 1 or -1 than it is to be near 0. For a concrete result, notice that sin(x) is more than 1/2 when x is between pi/6 and 5pi/6, and this happens with probability 1/3. So sin(x) is a random number between -1 and 1, but there's a 1/3 chance that it's at least 1/2. Similarly, there's a 1/3 chance that it's less than -1/2, and only a 1/3 chance that it's between -1/2 and 1/2. (A word on notation - I would say that sin(x) is random but not uniform)
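A Monte Carlo sketch of the effect described above (sample size and seed are arbitrary choices for illustration):

```python
import math
import random

random.seed(0)
N = 100_000
# Draw x uniformly from [0, 2*pi) and look at the distribution of sin(x).
samples = [math.sin(random.uniform(0, 2 * math.pi)) for _ in range(N)]

# Each of the three bands should get roughly 1/3 of the mass.
frac_high = sum(s > 0.5 for s in samples) / N
frac_low = sum(s < -0.5 for s in samples) / N
frac_mid = sum(-0.5 <= s <= 0.5 for s in samples) / N
print(frac_high, frac_mid, frac_low)  # each near 1/3
```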
Consider the function f:[0,1) --> S^1 defined by f(t) = e^(2*pi*i*t). (I'm thinking of S^1 as the unit circle inside C).
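A small numeric illustration of this map (sample points chosen arbitrarily):

```python
import cmath

def f(t: float) -> complex:
    # f: [0,1) -> S^1, wrapping the interval once around the unit circle
    return cmath.exp(2j * cmath.pi * t)

# Every image point has modulus 1, and f(0.25) is a quarter turn, i.e. i.
assert all(abs(abs(f(t)) - 1) < 1e-12 for t in (0.0, 0.25, 0.5, 0.9))
assert abs(f(0.25) - 1j) < 1e-12
```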
No, this statement has nothing to do with the axiom of choice and you can prove it quite straightforwardly without it. For instance, suppose f:A-->B is bijective. Then for any b in B, there is an a in A with f(a) = b (since f is surjective). Since f is injective, this a is unique. Hence we obtain a well-defined function g:B-->A by setting g(b) = a. It should be clear that this is an inverse to f. Showing existence of an inverse implies bijective is similarly straightforward. The axiom of choice only enters the picture when we are trying to show that surjective functions have a right inverse. To see this let's try to follow the proof above: Suppose f:A-->B is surjective. Take any b in B. Now what we want to do is pick an a such that f(a) = b. Such an a exists for sure since f is surjective, but will no longer necessarily be unique since we are no longer assuming f is injective. Hence we actually need to use the axiom of choice here to pick such an a in order to construct an inverse. edit: And this latter statement is actually equivalent to the axiom of choice.
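A finite toy illustration of the first argument (no choice is needed here, since everything is explicit and finite):

```python
# A bijection f: A -> B given explicitly as a dict; its inverse g is
# obtained by sending each b to the unique a with f(a) = b.
f = {1: "x", 2: "y", 3: "z"}
g = {b: a for a, b in f.items()}

assert all(g[f[a]] == a for a in f)  # g o f = id_A
assert all(f[g[b]] == b for b in g)  # f o g = id_B
```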
There's lots of little ways (many of which are closely related) in which it behaves differently than other primes, which can cause all kinds of issues in number theory. To name a few (I'm sure I'm forgetting to mention some things): One big one is the fact that [;(\mathbb{Z}/p^n\mathbb{Z})^\times\cong \mathbb{Z}/(p-1)p^{n-1}\mathbb{Z};] if [;p;] is odd, but if [;p=2;] then [;(\mathbb{Z}/2^n\mathbb{Z})^\times\cong (\mathbb{Z}/2\mathbb{Z})\times(\mathbb{Z}/2^{n-2}\mathbb{Z});] . This makes the 2-adic numbers behave a little differently than the [;p;]-adics for [;p>2;]. Specifically, [;(1+p\mathbb{Z}_p,\times) \cong (\mathbb{Z}_p,+);] for [;p>2;] but [;(1+2\mathbb{Z}_2,\times) \cong (\mathbb{Z}/2\mathbb{Z})\times(\mathbb{Z}_2,+);] . Since the groups [;(1+p\mathbb{Z}_p,\times) ;] are closely related to the ramified extensions of [;\mathbb{Q}_p;] (specifically to the wildly ramified abelian extensions) via local class field theory, this just generally means that ramification at 2 can be more complicated than ramification at other primes. Relatedly, the unit group of [;\mathbb{Z};] is [;\{\pm 1\}\cong \mathbb{Z}/2\mathbb{Z};] , whose order is divisible only by 2. This means you can usually ignore this group when working at an odd prime, but not when working at 2. This means you often get extra factors of 2 cropping up in number theory formulas. The absolute Galois group of [;\mathbb{R};] is [;\operatorname{Gal}(\mathbb{C}/\mathbb{R})\cong \mathbb{Z}/2\mathbb{Z};]. This means you can sometimes get away with ignoring what happens at the infinite places of a number field when you are working with an odd prime, but not when you are working with [;2;]. You can tell the difference between [;1;] and [;-1;] mod [;p;] for any odd [;p;], but not mod [;2;]. Somewhat relatedly, there are situations where things can behave somewhat more complicatedly depending on whether something is either [;1\pmod{p};] or [;-1\pmod{p};]. 
When [;p=2;], both of these situations can happen simultaneously, which makes things even harder. In addition to all of this, a lot of the simple examples of number theory involve things that are implicitly or explicitly related to [;2;]. For example, quadratic reciprocity involves quadratic polynomials, which behave quite differently mod [;2;] than they do mod [;p;] for [;p>2;]. This issue sometimes goes away (and is replaced by a different set of primes behaving strangely) when you look at certain generalizations (e.g. class field theory). There's also less obvious examples of number theoretic objects implicitly being related to 2. For example, elliptic curves and modular forms are part of the Langlands correspondence for [;\operatorname{GL}_2;], which means that they can sometimes behave unexpectedly at 2 as well.
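The first point above can be checked by brute force for small moduli; a quick sketch:

```python
from math import gcd

def unit_group_stats(m: int):
    """Return (order of (Z/mZ)^x, maximal element order), by brute force."""
    units = [a for a in range(1, m) if gcd(a, m) == 1]

    def order(a: int) -> int:
        k, x = 1, a % m
        while x != 1:
            x = (x * a) % m
            k += 1
        return k

    return len(units), max(order(a) for a in units)

# Odd p: (Z/p^n)^x is cyclic, e.g. phi(27) = 18 and some unit has order 18.
assert unit_group_stats(27) == (18, 18)
# p = 2, n >= 3: (Z/2^n)^x is Z/2 x Z/2^(n-2), e.g. phi(32) = 16, max order 8.
assert unit_group_stats(32) == (16, 8)
```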
[ "what is a good blogging site that one can post his notes and other things on mathematics and others see and leave comments?" ]
[ "math" ]
[ "8nrauv" ]
[ 13 ]
[ "Removed - ask in Simple Questions thread" ]
[ true ]
[ false ]
[ 0.76 ]
null
Maybe medium? https://medium.com/topic/math
I've never been on Medium much before, except when Reddit links to articles there. Anyway, one of the first things that I stumbled upon was a discussion of cumulative advantage, i.e. the Matthew effect. From the article: "We like to think in America that most things come down to hard work, but a few lucky (or unlucky) breaks early on can have lasting effects over decades. If we look at luck in this way, it can change the way you view your life." Reminds me of this video, a scathing critique of the Home Depot founder's praising of both luck (!) and capitalism.
Does Medium have a good way of including LaTeX?
For math explicitly, I doubt anything like that really exists unless you generate interest (like Terry Tao's blog does). You could make your own subreddit or blog page and advertise it with a post here (I'm fairly sure that's within community guidelines), but ultimately if you're looking for comments and discussion, you're limited by the interest of those who come across it. I don't think that sort of forum exists right now.
Reddit. The arXiv.
[ "Combinations of Sweets" ]
[ "math" ]
[ "8nqnvw" ]
[ 1 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.66 ]
null
Since there is at least 1 of each color we can remove one of each to simplify the calculations. Thus we have 11 sweets that can be in three different colors. Let's order the sweets with all of one color first, then all of another, and so on. And let's place a little wall between the colors. Now we have 13 objects (11 sweets and two walls). How many there are of each color is completely determined by where the walls are, so we just need to figure out how many different ways the walls could have ended up. This means the answer is 13 choose 2 = 13C2 = 13!/(2!(13-2)!) = 78. And your friend has a 2/(13*12) = 1/78 chance of guessing right. Edit: so in general, if you choose n things that can be in k different states, the number of arrangements that can be made is (n+k-1)C(k-1).
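A brute-force check of this count (a sketch assuming the original problem had 14 sweets in 3 colors with at least one of each, so that 11 remain after removing one per color):

```python
from itertools import product
from math import comb

# Count color distributions (a, b, c) with a + b + c = 14 and each >= 1.
count = sum(1 for a, b, c in product(range(1, 15), repeat=3) if a + b + c == 14)
assert count == comb(13, 2) == 78  # matches the stars-and-bars answer
```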
I see how I made that confusing. The idea is that if you move the walls you change the color of the sweets. The sweets left of wall1 has one color, the ones between wall 1 and wall 2 have another, and those to the right of wall 2 have a third. Does that make sense? I'm basically rephrasing the problem in terms of walls instead of colors.
Thank you. I'm a little unclear on the function of the "walls" though as they don't change state. Edit: Forgot to check my working.
Gotcha.
Check out the Schaum’s Outlines. Pick your poison....
[ "Any good books or websites full of math problems to solve?" ]
[ "math" ]
[ "8npmyz" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.88 ]
I'd love it if there were a series of books full of maths problems that go from basic (long division, long multiplication etc.) to really advanced stuff that you'd do in university. I don't know if there's such a thing, but that's what I'm looking for. Alternatively, a website with the same stuff would be good too, but I like books more. Edit: wtf why is this so hard to find? I'd think there'd definitely be at least something....
I quite enjoyed Professor Povey's Perplexing Problems. Has Maths and Physics problems (still mainly mechanics though) and lots of good probability problems.
https://imaginary.org/sites/default/files/taskbook_arnold_en_0.pdf "Edit: wtf why is this so hard to find?" There are thousands of books that might fit your criteria. A web search turns up tons of relevant stuff. What part in particular is hard to find?
It isn't really what you requested, but the Moscow Puzzles are quite fun.
Try going to Math Counts or other math competition websites for practice problems.
Project Euler maybe? But it might be too advanced for what you are looking for.
[ "Yet another question on frequentists and Bayesians" ]
[ "math" ]
[ "8npkqx" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.78 ]
First, let me take a brief detour through quantum mechanics. There you have a formalism (Dirac's) for calculating and predicting the outcomes of experiments. This uses Hilbert spaces, operators, wave functions, and the Schrödinger equation. Adhering only to the formalism and asking no further questions labels you as a "shut up and calculate" physicist. Many people, however, add onto this an interpretation (Copenhagen, pilot wave, many worlds, ...) of what the wave function actually represents in the physical world. Currently no one interpretation can be proven over any other because all must include Dirac's formalism and predict the same experimental results. It seems to me there is a similar phenomenon in probability theory. On one hand, there are Kolmogorov's axioms (using measure theory) which you can use to calculate probabilities. On the other hand, there are two popular interpretations (frequentist and Bayesian) of the meaning these probabilities have in the real world. It then seems to me these two interpretations must attach meaning to the constructs appearing in Kolmogorov's formalism. The reason I ask these questions is that, whenever I try to look this up, Bayesian probability is often presented as an underdog alternative to the frequentist reasoning we've all been taught. But when I look at my courses in probability, we started from Kolmogorov, defined random variables, and calculated transformations and convergence. Then, in a course on statistical inference, we used these results to define point estimators, confidence intervals, and hypothesis tests in a rigorous way. I tried looking over these notes and I cannot find a place where philosophy entered the picture. If there is some philosophy to be found there, I would like to know where the math ends and where the interpretation starts. I think this will strengthen my intuition in both areas.
I appreciate your detour, but I don't think it's as similar as you put forward here. Both interpretations are based on Kolmogorov's axioms, but they (sometimes) do different calculations to answer different questions that, in the end, attempt to answer a similar overall question. Re: "define point estimators": while Bayesian stats can be done analytically in some cases, it's often the case that the models we're interested in can only be computed by sampling. As a consequence (and this is a good selling point in itself), we get estimates of the full distribution. When we already have the full distribution, we can just take the mean / median / maximum a posteriori of whatever we want a point estimate of. Re: "confidence intervals and hypothesis tests": here's where the philosophy enters. Confidence intervals and classical hypothesis tests are based on frequentism (the long-run frequency of test statistic >= threshold converges to alpha under X assumptions) and are ways of getting at a probability p(model | data) when all you have is the likelihood p(data | model). Bayesians calculate p(model | data) directly from the posterior.
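A toy numeric contrast of the two point estimates (a sketch using a coin-flip model with a uniform Beta(1, 1) prior, chosen purely for illustration):

```python
# k heads observed in n flips of a coin with unknown bias theta.
k, n = 7, 10

# Frequentist point estimate: the maximum-likelihood estimate k / n.
mle = k / n

# Bayesian: Beta(1, 1) prior + binomial likelihood gives the posterior
# Beta(1 + k, 1 + n - k), a full distribution; its mean is one point summary.
posterior_mean = (1 + k) / (2 + n)

print(mle, posterior_mean)  # 0.7 vs 0.666..., shrunk slightly toward 1/2
```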
AP Stats is definitely more frequentist than Bayesian
Re: "p(model)=1 for that experiment": I might be misunderstanding you, but no. If that were the case, you could do the same data collection, test a different model, and have two models with 100% probability each => 200% probability total. That's not allowed for a probability. Or think about it in a different way: there would be no need to collect any data, ever. Usually, the process considered includes the data collection (the data is a random variable), and p(model | data) (really a shorthand for something like p(parameter >= threshold | model, data)) is for any particular model. What you do is calculate the (long-run, frequentist) probability of making a type-1 error.
... Confidence intervals and classical hypothesis tests are based on frequentism ... are ways of getting at a probability p(model | data) ... In a frequentist interpretation the probability is a limit of a repeated process, so p(model | data) is the limiting rate of some process producing a model, provided the sample data is the data that was observed. So what's that process? Naively, if the goal of an experiment is to test a particular model, then p(model)=1 for that experiment, so p(model | data) = 1 for repeated experiments, right?
... p(model | data) (really a shorthand for something like p(parameter >= threshold | model, data)) Ah, I was confused by that shorthand, since the model is generally not treated as the result of a random process. "p(parameter >= threshold | model, data)" still doesn't really make sense to me, but it makes a whole lot more sense than having the model on the left of the "|".
[ "Proof-Based MOOCs?" ]
[ "math" ]
[ "8np76a" ]
[ 16 ]
[ "" ]
[ true ]
[ false ]
[ 0.9 ]
Hey all, I have some free time this summer I wish to spend supplementing my studies. While I have found many interesting courses on edX and Coursera (such as one on complex analysis), and while they do mention being proof-based (at least in lectures), are the examinations as well? I ask, mostly, as I am unsure of how they would be able to test such material.
Notes/HWs and lectures for abstract algebra taught by Benedict Gross at Harvard.
EdX has a linear algebra course called LAFF that does its best to be proof based. At the end of the day though it’s tough to pull off in that environment and the questions boil down to guessing if something is never/always/sometimes true. Still a good course though if you’re interested in implementing LA algorithms in MATLAB. Coursera has one called intro to mathematical thinking. These tests have proofs and ask you to spot the flaws. It’s not the same, but probably as good as you’re going to get in the MOOC setting.
There's http://nptel.ac.in/course.php, a collection of online courses from some pretty good universities in India, all in English. I used their mathematical stats course for the first half of a semester because my lecturer was useless.
The Galois theory course on Coursera is proof-based. The lectures consist mostly of proofs being presented. In most weeks, you just have multiple choice questions, but there are some assignments where you have to write proofs, too (these are peer-graded), in addition to some optional ungraded exercises. The course goes deeper than the usual first encounter with Galois theory in an undergrad course. The only other place where I've seen the characterization of separability in terms of tensor products of algebras was in the context of étale algebras. In the end, there's also a tiny bit of what can be considered algebraic number theory, e.g. a proof that number fields have integral bases. Also, I hadn't seen the normal basis theorem until we proved it in Galois cohomology to show the additive version of Hilbert's Satz 90. It can be a bit dry, but if you're interested in abstract algebra, I recommend it.
I enjoy testing environments - it sort of keeps me in tune over the summer months, so it won't be as much of a shock when I return to classes in the fall.
[ "More efficient way to calculate a satisfying model for a 3CNF propositional formula" ]
[ "math" ]
[ "8noj51" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
I've been working on a way to deal with logic models more easily and I came up with this solution. Because this is an NP-complete problem, I feel like I missed something, but I can't find any errors in my math or logic. Please let me know if you see a mistake! The algorithm to calculate the model works by breaking down the network of constraints of a CNF Boolean formula, with exactly 3 literals per clause, then recombining them to see if there is any assignment possible. The formula is broken down using a table like the one below. Each column represents one way to group 2 out of 3 variables from a clause, and each row is one possible way to assign those two variables. *It must be noted that variables and literals are taken to be two separate things. A variable is a value that is assigned as either 0 or 1. A literal is a value that is evaluated based on what its variable was assigned. Variables will be capitalized and literals lowercased to reinforce the distinction. Each cell represents us assigning the two variables of the column we're in, with the values from the row we are in, respectively. Because we formed the columns based on a grouping of two literals from our clauses, one of the assignments will make both of them evaluate to false. In the cell corresponding to this particular assignment, we put the third literal from the clause. This is to let us know that there is some clause that depends on this literal evaluating to true so that the clause remains satisfied. Take a moment to really internalize what that means, as it's central to all of this. Here's an example with a single clause: Looking at this, we know if X=0 and Y=1, then the clause can only be satisfied if the variable Z=1, because the literal z must hold. Similarly, if the variables X=0 and Z=0, then Y=0 for the clause to be satisfied, because the literal not-y must hold. Since we only used one clause, it's not apparent how useful this presentation of the formula could be. Let's look at a bigger example. 
With this bigger example, it's easier to see how this table lets us see interactions between clauses. Since a clause can only drop one literal in a cell in any particular column, multiple literals in a cell are caused by overlapping clausal constraints. For example, if we assign A=0 and B=0, then we can see that there is a clause that will be unsatisfied unless c holds and another that'll be unsatisfied unless e holds. Therefore, if we want to assign A=0 and B=0, then we must also assign C=1 and E=1 to keep our clauses all satisfied. However, now that we have these 2 new variable assignments, we have to make sure these 4 all hold together. So let's look up the new combination A=0 and C=1 in our table. In the cell we have not-b, so we know B must equal 0. Well, we've already decided B=0 for this group, so we don't have to add anything. For A=0 and E=1, there is nothing in the cell, therefore no requirements for what we have to assign, so we don't add anything. Next we try B=0 and C=1. This gives us d. Now we can add D=1 to the group (A=0 B=0 C=1 D=1 E=1). This is a total assignment to all the variables, but we still need to check that they all work together. Next we try B=0 and D=1, which has nothing in its cell, so we go to B=0 and E=1, which also has nothing in its cell. Moving on to C=1 and D=1, we get not-a and not-e. Here is where a problem happens. For not-a to hold, A must equal 0, which is fine since that's what we've already assumed. However, for not-e to hold, E must equal 0. Well, how can E=0 when we've already assigned it 1? It can't, as it can't simultaneously be both values. This means that there is a clause that is going to be left unsatisfied, and this assignment to the variables isn't valid. Keeping track of these invalid assignments is important because we want to know if all possible ways to assign some clause’s variables aren’t valid. To do this, we construct a similar, but expanded, table. This table has a column for the unique grouping of 3 variables specified by any clause. 
There are 8 columns for each of the ways you could assign the 3 variables. The key property we are capitalizing on here is that any unsatisfiable formula will have at least one column in which all 8 rows contain a contradiction like the one we worked through above. To fill in a cell, we run through the same process as above. We keep filling up the cell with the held literals until we either reach some contradictory assignment or a stable* set of assignments with no contradictions. If we find a contradiction, we can stop and move on to the next cell. If we find a total assignment, meaning we were guided to assign all the variables one way or the other, then we have found a satisfying assignment as well as deducing our formula is satisfiable. If we get to the end of a column and there was no way to assign any of our 3 variables without a contradiction arising, then we know our formula is unsatisfiable. *It is possible to only get a partial stable assignment. This happens when our initial table has empty cells, meaning there were no constraints on our specific grouping. If we get to the point where finding constraints for groups isn't returning any literals that need to be held, we might only be able to specify some subset of assignments, leading to a hole in our table. This actually happens at (CDE = 010). CD=01, CE=00, and DE=10 have nothing in their cells, so we have no clue what to assign them without just guessing. We want to refrain from doing this, so we use a little trick. First we find an overlapping column (a column with at least 1 variable that is the same as some variable in our column). We'll use the column BCD for this example. We already have an assignment we are looking at for CD, so we transfer that over to BCD, however, B could either be assigned 0 or 1. Therefore, we need to check both corresponding cells BCD=001 and BCD=101. 
If both of these cells have been crossed off for containing contradictions, we know our hole can be filled in as another contradictory assignment. This is because if we go with our hole's assignment, CDE=010, then that means BCD could only be assigned 001 or 101. If those are both invalid, then our hole is invalid. Otherwise, if either of those two is itself a hole, we can add the holes together and go back to checking if each group of two can fit. (This means the table has to be completed first before holes can be filled.) Complexity analysis: Assuming each clause has exactly 3 literals, there will be 3 ways to choose 2 of them. In terms of constructing the initial table, this means each clause can only add at most 3 columns. It could be less because a column might already exist from another clause. For filling in the table, each clause will contribute exactly 3 literals that need to be held. Altogether, each clause would require, at most, 6 units of work: 1 unit to add a column and 1 to fill in a cell, this process 3 different times. In terms of n, the total variable count, we end up recording every way to choose 2 variables from a set of n, or n(n-1)/2. This simplifies to O(n^2). No matter the n, this will always be less than 2^n. There is also a constant factor of 4 to account for each way to assign any group of those 2 variables. All-in-all, this means that the initial table can always be constructed and filled in polynomial time and space. The second table will have a column for each clause that contains a unique set of 3 variables. So in the worst case for n variables, there could be n(n-1)(n-2)/6 columns. This simplifies to O(n^3). No matter the n, this will always be less than 2^n. There is a constant factor of 8 for each of the ways to assign the set of 3 variables. Creating the table can be done in polynomial time and space. Filling it in is a little more complicated. Each cell requires you to look through, at most, O(n^2) combinations, which, as we've shown before, is less than 2^n amount of work. 
While sorting through those combinations, either we will find a contradictory assignment and be able to move on to the next cell, or we will find a stable assignment. Contradictory and total stable assignments will always be found within the initial search, while partial stable assignments, on the other hand, require some extra work. Partial stable assignments require us to query the expanded table to fill in the holes. Either our query will allow us to fill in the hole as invalid, or our query will return another hole, which we combine with the first, then start the process again. In the worst case, you will have to look through all cells to find a contradiction, but if one isn’t found after looking through the entire table, then we know the formula is satisfiable, and adding our holes together will give us that total assignment. Altogether, this would mean that filling in the expanded table and finding a satisfying assignment, or lack thereof, would happen in polynomial time. Edit: grammatical fix.
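The polynomial table sizes claimed above can be sanity-checked numerically (a sketch assuming n variables: C(n,2) pair columns with 4 assignments each, and C(n,3) triple columns with 8 assignments each):

```python
from math import comb

# Pairwise table: C(n, 2) columns x 4 assignments each -> O(n^2) cells.
# Triple table:   C(n, 3) columns x 8 assignments each -> O(n^3) cells.
# Both are polynomial in n, while full enumeration grows like 2^n.
for n in range(4, 40):
    assert 4 * comb(n, 2) == 2 * n * (n - 1)
    assert 8 * comb(n, 3) == 4 * n * (n - 1) * (n - 2) // 3
assert 8 * comb(30, 3) < 2 ** 30  # polynomial vs exponential, even at n = 30
```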
If you don't backtrack it will be pretty easy to find a counterexample by just running enough random instances of the problem. If you backtrack, then complexity cannot be polynomial (at least not with what you showed here)
Your approach isn't complicated at all. Essentially you just rewrite each x or y or z into several (not x) and (not y) => z clauses. Each table entry is just one or a few such clauses, and the "trying" strategy is still naively finding one applicable clause, deducing something, and repeating the process. It seems there are no real enlightening ideas, so smart people in history would probably have already proven it this way if it worked. For that reason, everyone expects the approach not to work, so people are not willing to waste time on it. I'm among them. There's no hope proving this algorithm works because it's not likely to work. But since you asked: if you want to prove your algorithm works, one key question to answer is, if the formula is satisfiable, how do you know your process will definitely reach a solution, especially when the initial guesses are totally wrong? "If my process says it's unsatisfiable then it's unsatisfiable" is a recipe for the existence of a counterexample.
Code it up and try to beat survey propagation on a large problem. In the very unlikely event this works, post the code to GitHub with some runtime stats and ask here again. Much more likely, profiling your code will show you where your approach breaks.
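A minimal harness for that kind of random testing might look like the following sketch. The `candidate_solver` being tested is hypothetical (whatever implementation of the table-based method you write); the brute-force checker is exact but only feasible for small instances:

```python
import itertools
import random

def random_3cnf(n_vars, n_clauses, rng):
    """Generate a random 3-CNF formula as a list of 3-literal clauses.
    Positive int v means variable v; negative means its negation."""
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

def brute_force_sat(n_vars, clauses):
    """Exact satisfiability check by enumerating all 2^n assignments."""
    for bits in itertools.product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# Usage sketch: compare a candidate polynomial-time solver against the
# exact answer on many small random instances, looking for a disagreement.
# for seed in range(10000):
#     f = random_3cnf(12, 50, random.Random(seed))
#     assert candidate_solver(f) == brute_force_sat(12, f)
```

A single disagreement on any seed is the counterexample people here are predicting; no disagreement over many seeds proves nothing, but tells you which instance sizes are worth profiling.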
There is no backtracking, so that keeps it from combinatorially exploding there. That's what everyone's been telling me: "if you throw enough at it, I'm sure there'll be a counterexample." No one has been able to actually show a flaw in my logic that would allow a counterexample to exist. That's the holy grail I'm looking for. Still, I really do appreciate your feedback! On the other hand, any tips on how to more strongly prove this algorithm actually does always work?
It's you who are underestimating the difficulty of the problem. Unless you can prove that your algorithm never misses a solution (i.e. that it would never report the opposite result on a satisfiable CNF), we don't care about the complexity of a wrong algorithm; in the problem P vs NP, the algorithm has to return the correct result all the time. Quote from Carl Sagan: "Extraordinary claims require extraordinary evidence". The extraordinary evidence is what we fail to find here.
[ "Gergely Szűcs and Søren Galatius, \"The Equivariant Cobordism Category\" [1805.12342]" ]
[ "math" ]
[ "8np23k" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
null
Sure, there are a few motivations that I can think of, but to me it's interesting for its own sake, and I thought about motivations afterwards. I've been curious what the correct genuine equivariant analogues of principal bundles, tangential structures, cobordism, the Pontrjagin-Thom map, and Thom spectra are, and this paper had something to say about all of that. This is related to other phenomena in equivariant homotopy theory (e.g. one major reason behind the somewhat bizarre-looking genuine equivariant stable homotopy category is the desire to have duals of all G-orbits, and this more or less follows from the equivariant version of the Pontrjagin-Thom theorem). There's also plenty to say about TQFT. The original GMTW paper was an important step in the classification of invertible TQFTs as homotopy classes of maps out of Madsen-Tillmann spectra (though it didn't completely settle the question). From a physics perspective, that kind of result was about invertible systems with symmetries that don't move spacetime, such as those associated with a principal G-bundle, or various tangential structures ("fermion parity" apparently corresponds to a spin structure, etc.). However, there is a lot of research into physical systems with a symmetry that does move spacetime, e.g. reflection or translation. There seems to be an ansatz, justified by physics arguments I can't evaluate, that for the purposes of classification, such a G-symmetry (maybe only in the unitary case) is the same thing as a background symmetry for the same symmetry group (i.e. the additional data of a principal G-bundle). This new result of Szűcs-Galatius could offer a way to test that. (Warning: possibly-wrong speculation ahead.) It seems to me that the proper way to formulate a physical system with a G-symmetry that moves spacetime is to place it on a category of manifolds with a G-action.
In the case of invertible TQFTs, one could possibly use this result to try to verify the physicists' claim; presumably you could do that by showing that the relevant homotopy group of the G-fixed points of this genuine equivariant Madsen-Tillmann spectrum is isomorphic to the relevant homotopy group of the nonequivariant Madsen-Tillmann spectrum smash . However, the proper framework for classification appears to be fully extended invertible TQFTs, which is not seen by the category considered in this paper. There are probably other applications of this stuff. The original GMTW result, along with previous work of Madsen-Tillmann and Madsen-Weiss, addressed the Mumford conjecture and related questions by (I think) Segal that came out of older work on the mathematical foundations of conformal field theory (CFT). Perhaps this new result is useful for CFTs with a spacetime-moving symmetry given by a finite group. For G = Z/2 this might also be useful for studying things like reflection positivity; Freed-Hopkins use naïve equivariant homotopy theory to study invertible TQFTs with reflection positivity.
I think I can sort of understand the abstract, but I'm not familiar with this area so I can't really see what the implications or connections with anything else are. Do you think this is somewhat explainable? I understand (some of?) the motivation comes from TQFTs, and the nLab has convinced me that, for example, the cobordism hypothesis is important and interesting, but I can't quite connect this with that.
Rather than TQFTs or the cobordism hypothesis, these authors are primarily interested in understanding diffeomorphism groups of manifolds, specifically homological stability results. You can see how this works in the cited paper GMTW09 or the pioneering paper MW02 , where the non-equivariant story is told.
Was thinking of the same thing for [;Z_2;] and reflection positivity. The [; M = \bigsqcup B \Sigma_A ;] doesn't look too bad to actually compute concretely.
You're welcome! Thanks for your interest in this stuff!
[ "Are Princeton Lectures in Analysis a good idea to use as an introduction to analysis courses?" ]
[ "math" ]
[ "8nobj9" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
I'm planning on taking an Intro to Real Analysis class in the fall along with Fourier and wavelet analysis and I am thinking of purchasing these books to study over the summer. Is it a good place to start?
The first book starts with Fourier analysis rigorously, but I would recommend starting analysis the traditional way (construction of ℝ, sequences, series, continuity, basic integration, etc.) and then moving on to Fourier analysis. Maybe since your Fourier class does not have an analysis prereq, it is more of an applied class? I would recommend Stephen Abbott's Understanding Analysis, or Baby Rudin (especially good when paired with Francis Su's lectures on YouTube).
Err, definitely not. All the books presume you’ve had a first course in analysis.
No, especially because it appears that you are not at a top 10-20 university, so a first course in real analysis would not be nearly as demanding as any of the Stein-Shakarchi books from Princeton. Abbott's is an excellent and mild introduction; but of course, if you wanna impress and stand out from your peers, you can probably give Baby Rudin a try. It's a little terse, but it's a classic.
Personally I like Abbott's explanations better than any other analysis book, but Rudin has more interesting (and difficult) exercises. If you just want to get a basis, Abbott will probably more than suffice, but if you're up for a challenge then Rudin ("like drinking from a firehose") is the way to go. Good luck!
Weren't they written as a first sequence for undergraduates at Princeton? Obviously they're maybe more on a graduate level, but they are books meant for undergraduates.
[ "What is the largest number you know, and can you please explain it in layman's terms?" ]
[ "math" ]
[ "8npckp" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.42 ]
I know Graham's number is very large, but there are much larger numbers that I'd love to know. I have a fascination with huge numbers, because they blow my mind. What's a huge number that you've come across that you can explain to the general audience? Edit: a number that is used in a proof, not something arbitrary like Graham + 1
Well if they're busy they probably won't show up.
It might be more illuminating to consider fast-growing functions defined on ℕ than to consider any given natural number. Rather than discuss Graham's Number, we can consider Knuth's up-arrow notation. Rather than discuss BB(4), we can consider the Busy Beaver function. Rather than discuss TREE(3), we can consider the TREE function. Rather than discuss Rayo's Number, we can consider finite ordinals that are definable in a first-order formulation of ZFC in finitely many symbols (although I don't think Rayo's Number was ever used in a proof, just a googology battle).
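As a concrete illustration, Knuth's up-arrow notation is easy to sketch recursively (a minimal Python sketch of my own; it blows up long before the numbers get interesting):

```python
def up_arrow(a, n, b):
    """Compute a ↑^n b in Knuth's up-arrow notation.

    n = 1 is ordinary exponentiation; each extra arrow iterates
    the previous operation: a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)).
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 2↑3 = 2^3, 2↑↑3 = 2^(2^2), 3↑↑3 = 3^(3^3) = 3^27
print(up_arrow(2, 1, 3))  # 8
print(up_arrow(2, 2, 3))  # 16
print(up_arrow(3, 2, 3))  # 7625597484987
```

Even 3↑↑↑3 is already a power tower of 3s of height 7625597484987, far beyond anything computable this way, and Graham's number then iterates the arrow count itself 64 times.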
Shannon's number, 10^120, is a conservative estimate of the number of possible chess games.
The largest numbers I know of that are used in proofs come from the field of reverse mathematics: how little math can you get away with and still prove some proposition? The mathematician Harvey Friedman came up with some propositions that depend on very powerful systems (ZFC+LCA) to prove them, and most of those are associated with some very large numbers. Greedy Clique Sequences are the strongest ones I know off the top of my head, which get very large very fast. The largest ones I can pretend to understand are hydra numbers. Kirby and Paris came up with a hydra-slaying game that can always be won, but only in a shockingly large number of moves. Buchholz extended the game to even more powerful hydras, which take even longer to kill. Hydra games are really fun to play with, and I've been enjoying trying to extend Buchholz's game even farther. I recommend reading about both of them on the googology wiki, and seeing what you can come up with. I really should get around to writing that layman explanation of Hydra games...
Here are some big numbers:

* Ramsey number
* Skewes' number
* hyperoperation
* Knuth's up arrow notation
* Conway chained arrow notation
* Graham's number
* Ackermann number
* Fuse number
* TREE(3)
* SSCG(3)
* Busy Beaver number
* Rayo's number

Those are all finite numbers, at least. There's also a long list of really large (infinite) cardinals. Save that for another post though.
[ "Respect the parallelogram - brutal take down of a stupid math sign" ]
[ "math" ]
[ "6xaqks" ]
[ 17 ]
[ "" ]
[ true ]
[ false ]
[ 0.75 ]
null
I swear some of these people could probably be given a key, stood in front of a door, and be handed a piece of paper that says "the key you were given works with the door and behind it are riches beyond your wildest imagination", and they would spend the entire time complaining about how they never received any instruction about how doors, keys, or riches work. Edit: also the door is ajar.
They could have a god damn diagram of how keys work, and they'd still complain that nobody taught them how to use the key. First, unless you're rich -- at which point you should hire an accountant -- taxes are stupidly simple. They're made so that every person can do them. They're simple to the point that the majority of it is being told "take the number on line 19 and write it on this line." Second, I learned how to do taxes in classes in high school (one of which was required) and I still hear people I went to school with complain that they didn't learn how to do them because they thought the class was (rightfully, this is why we don't teach these things in school) a blowoff class. When you wrote a report on George Washington in fourth grade, it wasn't because your teacher wanted to learn about valley forge. It was to teach you to gather some information yourself: school can't cover everything, and you don't want it to cover everything.
First, unless you're rich -- at which point you should hire an accountant -- taxes are stupidly simple. It’s more like if you have a salaried job, few dependents, and no other financial dealings taxes are stupidly simple. If you are a private contractor, own your own business, take stock from a startup, live outside the USA, etc. taxes can get quite tricky, or at any rate be a huge pain in the butt gathering documentation and filling out a pile of forms. There are many “not rich” people whose taxes take at least several hours to handle. Taxes could be made a lot stupidly simpler if they were allowed to be pre-filled with information the IRS has already collected elsewhere, but the tax preparation industry lobbies very heavily against any change which would make taxes easier for individuals to deal with themselves.
I would like some instruction about how riches work.
That's fair. I inaccurately equated complicated income with being rich.
[ "Does this method work?" ]
[ "math" ]
[ "6x6lt8" ]
[ 1 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 1 ]
null
But he didn't say he was looking for the limit of f(x,y) as (x,y) -> 0. He said he was looking for the limit of f(x,y) as xy -> 0, and it is absolutely fine to do the substitution that he described to find that limit. The fact that the function has multiple variables is irrelevant if you're just interested in its behavior along a 1-dimensional curve.
Try r/mathhelp or r/learnmath or /r/cheatatmathhomework
you're right. i didn't read carefully enough.
Thanks for the help:)
No, that's not the limit of the function. Even if that limit as u -> 0 exists, it doesn't mean f(v) gets a limit as v -> 0, where v is the vector (x, y).
[ "Via /u/paolog, a list on Math.SE of \"'[o]bvious' theorems that are actually false\"" ]
[ "math" ]
[ "6x6tzz" ]
[ 39 ]
[ "" ]
[ true ]
[ false ]
[ 0.93 ]
null
Surprised not to see "If all partial derivatives of a function exist at x, then the function is differentiable at x."
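The classic counterexample to that statement (my example, not from the thread) is f(x,y) = xy/(x²+y²) with f(0,0) = 0: both partial derivatives exist at the origin, yet f is not even continuous there, since f(t,t) = 1/2 for every t ≠ 0. A quick numerical sketch:

```python
def f(x, y):
    """xy / (x^2 + y^2), extended by 0 at the origin."""
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x * x + y * y)

# Both partial derivatives at the origin exist and equal 0,
# since f(t, 0) = f(0, t) = 0 for all t:
for t in [1e-1, 1e-4, 1e-8]:
    assert f(t, 0.0) == 0.0 and f(0.0, t) == 0.0

# But along the diagonal, f(t, t) = 1/2 no matter how small t is,
# so f is discontinuous (hence not differentiable) at the origin:
for t in [1e-1, 1e-4, 1e-8]:
    assert abs(f(t, t) - 0.5) < 1e-12
```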
I feel this statement requires enough knowledge to understand that almost everyone who can understand it already knows it doesn't hold.
I don't think that this is obvious in the context in which it fails. In fact I think it is fairly clearly false if you understand what a topology is and what a limit is.
[I] have to wonder what's the [o]bsession with ensuring that all [w]ords are correctly [c]apitalized.
Previously submitted and discussed here three years ago, but i felt it worth resubmitting for those who didn't see it then, to demonstrate why we need rigorous proofs in maths.
[ "Go Yamashita has published a 300 page summary on Mochizuki's Inter-universal Teicmuller theory." ]
[ "math" ]
[ "6x9wsw" ]
[ 174 ]
[ "PDF" ]
[ true ]
[ false ]
[ 0.94 ]
null
That's probably because of the Greek letters.
Footnote on pg 6: The author hears that a mathematician (I. F.), who pretends to understand inter-universal Teichmüller theory, suggests in a literature that the author began to study inter-universal Teichmüller theory "by his encouragement". But, this differs from the fact that the author began it by his own will. The same person, in other context as well, modified the author's email with quotation symbol ">" and fabricated an email, seemingly with ill-intention, as though the author had written it. The author would like to record these facts here for avoiding misunderstandings or misdirections, arising from these kinds of cheats, of the contemporary and future people.
get fucking real at least 95% of phd students would look at this like it's chinese unless they are in a very specific field
Worth mentioning that "I.F.", which presumably refers to Ivan Fesenko, is the author of widely publicized articles on IUT (see here and here ), was the co-organizer of two international workshops on IUT, and has made the claim "I expect that at least 100 of the most important open problems in number theory will be solved using Mochizuki’s theory and further development" ( source ). So saying that he "pretends to understand" IUT is certainly quite the claim.
> After learning the preliminary papers, all constructions in the series papers of inter-universal Teichmuller theory are trivial (However, the way to combine them is very delicate and the way of combinations is non-trivial). After piling up many trivial constructions after hundred pages, then eventually a highly non-trivial consequence (i.e., Diophantine inequality) follows by itself!

So it's all trivial, save for hundreds of pages and multiple papers of prelims.
[ "I'm taking real analysis in the spring; what can I do to prepare?" ]
[ "math" ]
[ "6x6zxx" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.73 ]
I got B/C+s in my linear algebra and differential equations classes, and As in calc. These were at a community college, and most of the classes were focused on computation rather than theory. The reason I didn't do as well as I would like is my poor study habits, not doing my homework to completion, and general laziness; now that I'm more disciplined, I'm ready to do this right, and I'm starting with Real Analysis. RA also serves as my school's intro to proofs class, and I want to give myself as large an advantage as possible before I start in January. What would you recommend to me?
Stop being lazy. Start learning how to write proofs. https://artofproblemsolving.com/articles/how-to-write-solution
Clear the decks in the rest of your life. This is a demanding course and requires 20 hours of work outside of lecture per week. Take a light course load, cut back your hours at your part-time job, etc. Do whatever you can to make this class the #1 priority in your life. If you don't know how to do formal proofs, learn now. I've heard that How to Prove It by Velleman is good; at least it gets recommended a lot. Those two things make the biggest difference: already knowing how to do proofs, and being able to commit a lot of time to the course.
The hard part of Real Analysis isn't so much the material (which is, after all, mostly the same material you already learned in calculus), it's that you have to write proofs and understand quantifiers. Get good at those things and you should have little trouble.
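For instance, the quantifier statement at the heart of the course is the epsilon-delta definition of a limit; getting comfortable unwinding statements like this one is most of the battle:

```latex
\lim_{x \to a} f(x) = L
\iff
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :
\quad 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

Negating it (to show a limit does not exist) flips every quantifier, which is exactly the kind of manipulation the exercises drill.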
Second this. Also, try to go as far into Spivak's Calculus as possible. If money is a pressing issue, there's always http://libgen.io .
I would suggest Velleman's as well. It is a very good book, and less expensive.
[ "Each Constitutional Value Not Let" ]
[ "math" ]
[ "6x5f0t" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.36 ]
null
I don't know how to tell you this, but nobody can understand a single thing you say. You might have a disorder that causes you to have unorganized speech. Please seek professional help.
There was/is medical corruption tied to courts/with them also with areas's crimes against children, and against elderly, too, blaming others for their failures with English, and as recorded criminally ill stewards of medicine, and of database relationships to our areas, and to all.
Yes, your opinions are manifest in relationship to lie progression coverup styles with much recorded uncharged originally/still.
I'm sorry if what I said sounded like a personal attack. I am just a little confused with what you're trying to say.
I'm saying that people saying they are confused was/is being used to not let databases crimes recorded from being reported. And we've recorded sick natures with our kids.
[ "Math paper shows large animal populations can quickly go extinct" ]
[ "math" ]
[ "6x5a8c" ]
[ 14 ]
[ "Image Post" ]
[ true ]
[ false ]
[ 0.69 ]
null
This happens when people are willing to spend a lot of money to eat rare species. It is extinction via a homoclinic orbit; see http://www.sciencedirect.com/science/article/pii/S0022519317302916
For some reason I don't have access to the articles. What are the differential equations this is generated from? And how are they modelling populations with what looks like a second-order differential equation? Everything I see in population ecology is first-order.
Hurrah for the arXiv and open-access math preprints (paper is mostly unchanged between the peer-reviewed one and the preprint) https://arxiv.org/abs/1703.06736
I'm paranoid that the article is somehow watermarked, so I won't upload the full thing, but here is the model: http://imgur.com/a/jjrec
Plot doesn't show what seems like the most interesting feature in this context (since OP mentioned "quickly go extinct"), namely the behavior as t increases and population goes to zero. Is there any finite time for which population = zero?
[ "Why is the Reddit math community so hostile?" ]
[ "math" ]
[ "6x5pos" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.41 ]
null
> Plus all mathematicians prior to the 19th century including Gauss and Euler did mathematics without knowing the epsilon-delta definition of a limit.

Yes, because those tools were not available to them at the time. That is like saying we should not build houses using electric power tools because those were not around when houses were first invented.
I see /u/duckmath has an alt.
You say that most mathematicians know nothing of physics. Even if they remember very little, almost all will have learned at least basic physics at some point in their education. The parallel you draw between basic math and Latin is not valid - today, Latin is an esoteric, dead language, while math is necessary to understand science and is a fundamental part of society. We can observe what happens when a large portion of the population does not understand even basic science. Should a mathematician or an engineer ignore even the most well-known history? Certain knowledge is necessary in order to be aware and informed in society, even if that knowledge is not directly applicable to a career. Regarding rigor, nobody is saying that the work of Euler is worthless because he didn't prove his results rigorously. While Euler's results are amazing, it is still important to rigorously prove them eventually to be sure that they are true. Even in Tao's "post-rigorous" stage, mathematicians still arrive at formal proofs in the end. Whether or not all university students should have to learn epsilon-delta arguments is a matter of opinion of what constitutes "basic" math. It is not obvious what the standard for knowledge of math among educated people should be, but such a standard should certainly exist.
> seems intolerant to any opposition to the idea that everybody should know math

i've not observed this, personally; but granted, i certainly don't read every thread. Could you please link to a few recent examples?

> math is one of the best majors

i've not observed this either, but then i would hardly be surprised by people who hang out on /r/math being really enthusiastic about doing maths as a major. Particularly if, in order to do so, they've had to work against common societal attitudes such as "But what is it?" or "So, you just do arithmetic on really large numbers?"

> and that mathematics requires "rigorous formal proofs".

Well .... yeah. Intuition can be wrong. (Human intuition around probability, for example, seems to be awful, even if it was "good enough" for evolutionary purposes.) Intuition and heuristics certainly have their place in mathematics, but less so if one hasn't developed a good sense of how things work in a given mathematical field. Fields medallist Terence Tao has actually written a piece on his blog about this: "There's more to mathematics than rigour and proofs". He notes, for example, that "[t]he point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition." And the thing is, there's plenty of bad intuition out there; so it's no wonder people place an emphasis on rigorous proofs. (Note that i'm using the phrase "rigorous proofs", not "formal proofs"; to me, a 'formal' proof is one that can be machine-checked. Most proofs in mathematics are not 'formal' in that sense, because they instead elide the tremendous amount of tedium that would result, for the sake of keeping focused.)

> most mathematicians know nothing of physics or even mathematics not related to their specialization, yet insist that people who don't know basic mathematics are uneducated.

i'm really going to have to call "[citation needed]" on this. "most mathematicians" insist this?
My experience is something more akin to the opposite: the general public seems to wear lack of knowledge of mathematics as a badge of pride, in a way that they don't seem willing to do so regarding e.g. the works of Shakespeare or Turner. "Oh yeah, Turner, nah, I could never understand that shit, lol. Why should I have to learn it?" i'm not saying that the latter attitude doesn't exist; i'm just saying, i encounter it far less than i do the former attitude. So, yeah, i'd like to be shown some AMS survey results or something on what "most mathematicians" think in this regard.
> Yet mathematicians REQUIRE 18 year old international marketing majors to memorize and learn epsilon-deltas

This is a lie. Mathematicians don't set degree requirements for international marketing majors. If business colleges are requiring their students to take analysis, that is on them, not on mathematicians.
[ "Necessary and sufficient criteria for the geometry of the gear?" ]
[ "math" ]
[ "6x5qvv" ]
[ 24 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
In this post you can see that some shapes of the first gear are allowed, and some are not. Can the allowable shapes be characterized? That is, criteria on the first gear so that there exists second gear that will turn smoothly with it? Certainly convex is sufficient but not necessary. My next thought was star-shaped, but it's not hard to find some examples where this will not work (imagine a standard gear with very long skinny teeth). Extra credit points awarded for complete proofs.
I just want to say I love this question.
> Certainly convex is sufficient but not necessary

Eh? A circle is convex but makes for a shitty gear… Also I'd have guessed that not being convex was necessary, because you'd want some teeth and those will break convexity.
You're right. I guess we need to define what "works" means. I agree that if you want to actually rotate the other gear, the first must be nonconvex. I was mostly thinking of situations where the first gear would not turn. I guess there's two versions of the problem, now.
Consider if there was really high friction on the second gear, such that the second gear does not spin if the first gear is not touching it - a rectangle wouldn't work there. I think the criterion you really want is that it can 'pass' the second gear from tooth to tooth. Not that that's definitely impossible with a rectangle, but the corresponding gear wouldn't be a rectangle.
One question seems to concern the ability to simultaneously rotate without mutual obstruction, and the other concerns the possibility to transmit force. If you don't want to transmit force, convex shapes work fine. But as soon as you do, I think you need at least 3 teeth… Also, a rectangle is convex… I guess you could have another rectangle rotating next to it such that their corners kiss every once in a while, but I don't think that really catches the spirit of being a "turnable shape"…
[ "Why is calculus a required subject?" ]
[ "math" ]
[ "6x58sk" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.33 ]
null
What schools require art history majors to take calculus?
Lmao, any engineer will tell you they use every bit of math they learned on a daily basis. Also, if you are going to ask why everyone needs to have a basic understanding of math, then you also have to ask why we have art and history classes forced from 1st-12th grade, as most engineers don't need to know what Alexander the Great did in order to do their jobs.
Please don't feed the trolls.
Go away
Didn't even think about it, you're right
[ "Constructing a musical rhythm from successive golden sections." ]
[ "math" ]
[ "6x4vvj" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.78 ]
null
Hi Boykjie! Thanks for the note. I would post to r/listentothis but it's against the (stupid) rule that you can't post your own material. You're more than welcome to if the spirit moves you. Have a great weekend!
I was expecting something underwhelming, but this is actually quite musically interesting, could really be developed! Maybe not exactly the kind of stuff for /r/math though. Maybe /r/listentothis ?
Nice! For my taste, it's a little bit all over the place. Maybe 2 seconds as a base unit interval is too long and 1 sec would be enough? Also, have you tried using it as a pitch ratio? The g-ratio is approximately a sixth, and its enharmonic inverse is a third, between the minor and major (6/5 and 5/4). I believe it's used as a modulation in blues. I mean, pitch bending between the usual third and the g-ratio third.
I actually built a four note scale using Golden Sections of an octave. The cent values ended up being: 0 cents, 283 cents, 458 cents, and 742 cents. You can hear the scale at work in a track at http://williamsteffey.com/2017/08/08/what-it-sounds-like/ I know there is another popular way to express Phi as 833 cents. I haven't gotten to experiment with that yet.
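The successive golden sections of a 1200-cent octave can be reproduced in a few lines; rounding lands on the 742 / 458 / 283 cent degrees listed above (this is my reconstruction of the arithmetic, not the author's code):

```python
# Successive golden sections of a 1200-cent octave:
# each degree is the previous one divided by phi.
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def golden_scale(octave_cents=1200.0, degrees=3):
    """Return cent values obtained by repeatedly taking the golden
    section of the octave (largest first)."""
    values = []
    x = octave_cents
    for _ in range(degrees):
        x /= PHI
        values.append(x)
    return values

print([round(v) for v in golden_scale()])  # [742, 458, 283]
```

With 0 cents as the root, these three values give exactly the four-note scale described in the comment.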
> I know there is another popular way to express Phi as 833 cents.

Yeah, it's the one I had in mind, I believe. 2^(833/1200) is close to Phi, and 2^(367/1200) is close to 2/Phi. The thing is, the golden ratio is the number least approximable by integer ratios, which are the consonant ones. Phi in music is precisely the opposite of harmony: dissonance. The tritone has a similar property: 2^(600/1200) is just sqrt(2).
[ "Lecture notes for an introductory course on algebraic topology given by John Baez and Derek Wise" ]
[ "math" ]
[ "6x3jh0" ]
[ 37 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
null
They're cousins!
Shit's about to get real in the weekly topology reading group
Definitely read that as Joan Baez and had to do a double take
thats so cool
*Handwritten notes from the blackboard. YMMV but this is probably not the kind of stuff for independent learning.
[ "Anyone know the equation to predict the twists of a mobius strip when cut with just information about how many twists the intial strip had" ]
[ "math" ]
[ "6x4ac5" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 0.81 ]
null
I sort of ignored this question when I first saw it, but then decided to think more about what it is asking. I think you're talking about a Moebius band implicitly embedded in three-space, i.e. either knotted or unknotted. If you do not care about the embedding, a Moebius band with a 1/2 twist is unaffected by twisting by any further whole number of twists, but these are distinct when you think of them as embedded. (If you think of them as immersed, there is a strange phenomenon that a whole number of twists has no effect, so they are classified into 4 types according to even numbers modulo (1/2)Z.)

Anyway, thinking of them as embedded: if you slice with k evenly spaced slices, then if k is odd there is one Moebius band and (k-1)/2 bands with 1 whole twist, and if k is even, just k/2 bands with a whole twist, but these are linked. If you start with an unknotted Moebius band, the linking is according to the link which results as the closure of the 'half twist' braid.

Really what is being specified on each component is one nowhere-zero section of the normal bundle of an embedded circle, up to scalar multiplication, i.e. a global section of the projectivized normal bundle. This chooses one connected component in the space of trivializations of the trivial 2-plane bundle, i.e. a connected component in the space of automorphisms of the 2-plane bundle, or you could think of this as homotopy types of maps from the circle into the general linear group GL_2(R), or elements of the fundamental group of GL_2(R), which is Z.

It would be fun to come up with a theory of braids with a section of the projectivized normal bundle, or 'braided paper bands.' This is a special case of the same construction for links, and there is a fun and closely related subject called Kirby Calculus.
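That piece count is easy to tabulate. Here is a small Python helper of my own (taking k to be the number of evenly spaced strips the cuts produce, which matches the classic center-cut result at k = 2):

```python
def mobius_cut_pieces(k):
    """Predict what slicing a (half-twist) Moebius band into k evenly
    spaced strips yields, per the count in the comment above:
      k odd  -> 1 Moebius band + (k-1)/2 full-twist bands
      k even -> k/2 full-twist bands (mutually linked)
    """
    if k < 1:
        raise ValueError("need at least one strip")
    if k % 2 == 1:
        return {"mobius_bands": 1, "full_twist_bands": (k - 1) // 2}
    return {"mobius_bands": 0, "full_twist_bands": k // 2}

# k = 2: the classic cut down the centerline, giving one band with a full twist
print(mobius_cut_pieces(2))  # {'mobius_bands': 0, 'full_twist_bands': 1}
# k = 3: cutting at one-third of the width, giving a thinner Moebius band
# interlinked with one full-twist band
print(mobius_cut_pieces(3))  # {'mobius_bands': 1, 'full_twist_bands': 1}
```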
Cut it through the center; here is a pic
https://youtu.be/wKV0GYvR2X8
Tadashi Tokieda cuts various combinations of loops and Mobius loops - with surprising results.
[ "Trouble with proofs" ]
[ "math" ]
[ "6x3pi5" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.38 ]
Hello dear fellow Mathemagicians, over the last couple of years I have spent a lot of time trying to prove pretty much everything that was shown to me. During school that was often no problem, although I often lacked an idea to start with. Since being in university, trying to prove any theorems, axioms or just some "simple" equalities has been a huge pain. I take a couple of hours just to get the right idea of how to start, and then it also takes quite a while to get to a reasonable proof. As you see, I'm having a lot of trouble. Now I'm wondering if there are any specific hints that you guys could give me on how to improve the pace of finding how to start off a proof, and maybe how you find a good way to actually formulate the proof. I hope some of you have found some techniques that help, and don't mind sharing them! Have a good night
I took a course called "Introduction to Mathematical Rigor." It was very helpful. https://www.math.uri.edu/~tsharland/MTH307/MTH307.html The textbook is free online: http://www.people.vcu.edu/~rhammack/BookOfProof/ Check out: https://math.berkeley.edu/~hutching/teach/proofs.pdf
What courses have you taken so far in University? Generally discrete maths, abstract algebra, or apparently real analysis are the first real intro to proofs courses that most people are exposed to.
Taking a couple hours to devise a proof for a random statement is not necessarily unreasonable. If you're just going through a curriculum, the nontrivial theorems often took years (if not lifetimes) to produce in the first place... Textbook exercises should (usually) be solvable in less time, but that's because textbook authors generally pick bite-sized, tractable problems as exercises.
If I'm having trouble with proving something in particular, and I'm confident I understand all of the relevant definitions, first stop is working with examples. If I'm trying to show structures of Form Y have Property X, I find it useful to come up with a standard example and an example where I think there's no way it could possibly work! The standard example helps you get the feel, and the non-standard will either give you a counterexample or turns out to work in a way that helps you think about the original problem. It can also help to relax the initial conditions - if you're trying to show something is true for pink, triangular, large mice, try a purple example. This should tell you why the restriction is there in the first place. I work in graph theory, where it's easy to construct relevant examples for my work. This might not be so easy elsewhere, of course.
Thanks a lot man, I will check that out, hopefully it helps! Have a nice day!
[ "Favourite Math Youtubers?" ]
[ "math" ]
[ "6x3khw" ]
[ 32 ]
[ "" ]
[ true ]
[ false ]
[ 0.87 ]
Just looking to get more math videos in my life. Doesn't have to be how to's either, just based on math in someway
Mathologer , 3blue1brown or whatever it is, standupmaths , minutephysics and Looking Glass Universe
What everyone else already said, and I also love blackpenredpen and PBS Infinite Series
numberphile and standupmaths are my fav
James Grime/singingbanana . He is just the cutest! Oh and I guess his videos are also interesting. Dr Peyam also made a channel very recently and his videos on blackpenredpen are really interesting
adding to what's already here... MIT OpenCourseWare videos include full lecture series on calculus, linear algebra, discrete stochastic processes, financial math, etc... Harvard has lecture series on probability and abstract algebra. There's a site called MindYourDecisions that does quick math and logic puzzles. The puzzles are hit or miss and range from pretty good to the level of something based on PEMDAS that your grandma sends you. 2BlueBalls or whatever has high quality videos
[ "Norbert Blum Has Acknowledged His P vs NP Paper Was Flawed" ]
[ "math" ]
[ "6x5c6v" ]
[ 818 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
null
It's good that he's finally acknowledged it. I wonder what effect this'll have on the talk he is scheduled to give?
He said it will take a while to formally write up the flaw so I would imagine it will be canceled.
The paper isn’t “fake”, it’s “mistaken”. The author did not intend to mislead
At the very least it will show that this particular approach will not work, which will be useful for further research.
The whole approach is doomed.
[ "Please i an so frustrated is this not the same thing? I have been right at the correct answer for so long." ]
[ "math" ]
[ "6x3dzu" ]
[ 0 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.43 ]
null
/r/learnmath It is definitely not the same thing. Multiply yours out; you will see that it is different.
Wrong subreddit, /r/learnmath would be better suited for this. However, expand out your answer and you'll see they're not the same.
I've gotten tons wrong because I've typed 1x and 1y instead of x and y. I hate this.
Thank you. I'll check it out.
Your answer is (y-1)(14y+2) = (y-1)·14y + (y-1)·2 = 14y^2 - 14y + 2y - 2 = 14y^2 - 12y - 2, which is not the same as 14y^2 + 27y - 2
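A quick way to double-check an expansion like this without a CAS is to multiply coefficient lists directly; a small sketch (`poly_mul` is a made-up helper, lowest-degree coefficient first):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists,
    lowest-degree coefficient first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (y - 1)(14y + 2):  [-1, 1] is y - 1,  [2, 14] is 14y + 2
print(poly_mul([-1, 1], [2, 14]))  # [-2, -12, 14], i.e. 14y^2 - 12y - 2
```

Comparing the coefficient list against the one for the target expression shows at a glance that the two answers differ.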
[ "I wonder that Europe and America have private institutions for preparing Olympiad" ]
[ "math" ]
[ "6x1v2d" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 0.86 ]
I have always been curious about preparing for the Math Olympiad in Europe and America. Our country has a lot of private academies for the Math Olympiad. The students who want to enter these academies must finish the entire middle school and high school curriculum during elementary school (I did once, but when I was in middle school I gave up on entering the academy). These students spend time solving problems and learning new theory every day. I think they spend 5~8 hours studying math per day. In conclusion, my country is at the top of the IMO. Let me ask European and American students: are there any private institutions for the Math Olympiad in your country?
In America students who participate from the IMO come from schools all over the place. They qualify by taking the USAMO and are trained in a summer program called MOP at CMU. In Hungary a lot of the IMO participants come from a high school called Fazekas Mihály Gimnázium
I am an American student. I competed in the USAJMO (US olympiad for 10th graders and below) twice and the USAMO (US olympiad for 11th and 12th graders) twice, so I think I can answer this. There's no dedicated math olympiad schools (that I know of). Olympiad training mainly comes from buying your own books (although some contests give books as prizes, like USA Mathematical Talent Search) or buying courses. There's a lot of summer programs with a focus on getting better at math contests, and a company called Art of Problem Solving (AoPS) makes a lot of resources. The AoPS resources are internationally available but they're an American company and they're used by a lot of American kids I know. They make books for lower level contests, and offer an online class called WOOT for higher level stuff. The winners of the USAMO and USJMO get invited to the Math Olympiad Summer Program. I went there twice; we went to some lectures on mathematics given to us by former contestants or advanced current contestants (US IMO team members gave some talks, for example). We also did some contests while there as both a way to practice and to help the USA try out the IMO team. Studying varies a lot person to person. I personally studied ~8 hours a day during 11th and 12th grade (very on/off studying before that), but most of that was learning undergraduate math, not really prepping for contests (tho I did do contest prep). I knew some people who dedicated ~5 hours a day to just contest prep. A lot of people I knew would have some days where they spent a long time doing math, and some days where they just did it passively (i.e. working on some olympiad problem mentally while at school). Schools don't seem to work with us here. 
Mine actively worked against me--they tried to get me not to take the USAMO because it took too much time out of school and they thought I needed to pay attention to my junior and senior year classes, my math classes were never challenging and the school was very resistant to me moving up a level in math, and the only time I got to prep in school was during math team. "my country is top on IMO" In which year? USA is also pretty damn good at IMO.
Also, about IMO selection: We do it weird. The current procedure is you take something called the TSTST while at MOP. The best people on the TSTST get invited to take TSTs. The people who got the highest combined score on the TSTs + some other olympiads (including USAMO) get invited to IMO.
alpha af
Thanks! You have a good day too.
[ "Where to find this article by Hartmanis?" ]
[ "math" ]
[ "6x1lg2" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 0.81 ]
I want to read this article: Solvable Problems with Conflicting Relativizations by Hartmanis. But I can't find it. What is the best place to look for these things (when Google Scholar doesn't show anything)? It's probably more compsci related, but they don't allow text posts :(
Quick googling turns up many things that cite it. It appeared in the Bulletin of the European Association for Theoretical Computer Science in 1985. No evidence of that journal being online from that time period, so your best bet is to find a university library that has it and request inter-library loan (this assumes you have university affiliation).
Solvable Problems with Conflicting Relativizations by Hartmanis If you google that you will find several things that cite the paper by name.
Where was it referenced from? I can't find any evidence of a paper with that name ever being published. And it's Juris Hartmanis you're looking at right?
Non-Mobile link: https://en.wikipedia.org/wiki/ICanHazPDF /r/HelperBot_ /u/swim1929
Cool! Didn't know this twitter trick.
[ "Could someone explain me this? [text on comments]" ]
[ "math" ]
[ "6x0upl" ]
[ 338 ]
[ "Image Post" ]
[ true ]
[ false ]
[ 0.93 ]
null
Infinity is an attracting fixed point.
Here 's a good rundown.
So, I started studying dynamical systems and I am a bit confused after this post: I do get what both Julia and Fatou sets are. The Fatou set F(f) of a rational function f: Ĉ --> Ĉ is the greatest open subset of Ĉ (with respect to inclusion) such that if we restrict f to F, the iterates f^n form a normal family, i.e., every sequence has a subsequence that converges uniformly on compact subsets. The Julia set J(f) is the (closed) complement of Fatou ( J(f) = Ĉ \ F(f) ). I am OK with the intuition that F is the well-behaved part of Ĉ and J is the chaotic one. But after reading this post, I am a bit confused. I mean, doesn't it seem that the Julia set is well-behaved and Fatou is chaotic, and not otherwise? Could anyone help me with this, please? Thanks in advance :)
You can realize infinity as a "real" number, at least for topological purposes, by embedding C into the riemann sphere (by the stereographic projection, say) and the image of C will be all of the sphere minus one point which you can take to be infinity. This is a geometric realization of the topological notion of one point compactification.
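The two replies fit together: the escape-time test behind pictures of Julia sets just checks whether an orbit falls into the attracting fixed point at infinity. A minimal sketch for f(z) = z² + c (the bailout radius 2 is the standard choice for this family; the function name and other parameters are illustrative):

```python
def escapes(z, c, max_iter=100, bailout=2.0):
    """Return the iteration count at which the orbit of z under
    z -> z**2 + c leaves the disk |z| <= bailout (i.e. is attracted
    to infinity), or None if it stays bounded for max_iter steps,
    suggesting z lies in the filled Julia set."""
    for n in range(max_iter):
        if abs(z) > bailout:
            return n
        z = z * z + c
    return None

# Points attracted to infinity lie in the Fatou set of z^2 + c.
print(escapes(2.0, 0))  # 1: the orbit 2 -> 4 -> ... escapes
print(escapes(0.5, 0))  # None: the orbit 0.5 -> 0.25 -> ... stays bounded
```

Sweeping this test over a grid of starting points is exactly how the familiar Julia-set images are produced.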
I wouldn't be surprised if this post was written by a bot...
[ "[Large numbers] I need some ideas for giving meaning to the significance of a number that is extremely large" ]
[ "math" ]
[ "6x0rxw" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
[deleted]
How did you calculate this? SHA-512 is from the SHA-2 family, which is a completely different algorithm than SHA-1, so it doesn't really make sense to compare them, since I believe the Google researchers exploited some specific features of the SHA-1 algorithm. A better way might just be to find how many hashes/second the best rigs can compute, and divide 2**512 by that (not quite correct, but I think a reasonable approximation).
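The back-of-the-envelope division suggested above is easy to do exactly with Python's big integers; the 10**12 hashes/second rate below is an assumption picked for illustration, not a measured benchmark:

```python
HASH_RATE = 10**12             # assumed hashes per second (illustrative only)
SECONDS_PER_YEAR = 31_557_600  # Julian year

seconds = 2**512 // HASH_RATE        # exact integer arithmetic, no overflow
years = seconds // SECONDS_PER_YEAR
print(len(str(years)))               # 135 -- the year count has 135 decimal digits
```

Even at a wildly optimistic hash rate, the answer dwarfs any of the timescales on the heat-death timeline linked below.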
"pretty small" is an understatement. From a computational complexity standpoint, it's barely distinguishable from 2. It doesn't even take special notation to represent.
Just say that with current technology (and foreseeable future technology), a hash won't be broken until (find the appropriate position at https://en.wikipedia.org/wiki/Graphical_timeline_from_Big_Bang_to_Heat_Death ).
10 is approximately 2000 times larger than the number of chess positions. 10 is approximately 120,000 times larger than the Monster group.
This is assuming protons decay in the first place
[ "What are some tools/websites for learning undergrad&grad Maths. And also what are some things I can learn in 15 minutes that will prove invaluable." ]
[ "math" ]
[ "6wznws" ]
[ 37 ]
[ "" ]
[ true ]
[ false ]
[ 0.86 ]
Hello. First of all, this might have been asked before, but after 2 hours of googling which brings close-minded results like "All you need is a good brain and a pencil", I had enough :). We're in 2017; there surely have to be some great hidden gems in the depths of the internet. So, here are the two questions I'm asking: * What are some tools/websites for learning undergrad (&grad) Maths? * What are some things I can learn in 15 minutes that will prove invaluable? Moreover, I will do all my best to . (I also added some of the resources I know.) EDIT: First batch of resources updated. Thanks a lot for them, I had a quick browse and some of them seem extremely useful.
I don't really think Khan Academy is good for any undergrad level math. His linear algebra and calculus courses are very basic imo. You might want to check out Evan Chen's Napkin. It's a project where he tries to explain a lot of different facets of undergraduate mathematics. It's not the best piece of literature on those subjects, but it's certainly a sound introduction to those topics, as it's meant to be.
If you're doing commutative algebra or algebraic geometry, the stacks project is incredibly helpful. For category theory, ncatlab .
I can understand that. The main issue is that he assumes readers have extensive contest experience. I did so I didn't notice that much, but glancing through again in the first few chapters he makes references to some olympiad problems and some techniques that very few people would learn unless (a) they don't need Napkin because they've already taken college math or (b) they do olympiads so they study a very narrow branch of college math in a lot of depth but neglected pretty much everything else.
Symbolab and Paul's online math notes are also good websites.
Honestly, if you're planning on learning undergrad and grad math, your best bet is to read books.
[ "I'm a professional software developer that wants to go back to college (dropped out) to major in mathematics. What am I in for?" ]
[ "math" ]
[ "6wz4rn" ]
[ 3 ]
[ "Removed - see Career & Education Questions thread on front page" ]
[ true ]
[ false ]
[ 1 ]
null
I major in mathematics and there's definitely some harder upper math classes. For me my probability and statistics classes were the worst. Of course it depends on your professors and some math courses just click more for certain people! As long as you put in the work you'll be fine. Good luck!
I'm wanting to major in math because I want to do research in AI (and would like to be prepared for the underlying mathematics)
Why not just do a CS undergrad and take the necessary math/stats for machine learning?
I suppose that would be best
[ "Everything about Model Theory" ]
[ "math" ]
[ "6wzs6z" ]
[ 71 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
Today's topic is . This recurring thread will be a place to ask questions and discuss famous/well-known/surprising results, clever and elegant proofs, or interesting open problems related to the topic of the week. Experts in the topic are especially encouraged to contribute and participate in these threads. Next week's topic will be . These threads will be posted every Wednesday around 10am UTC-5. If you have any suggestions for a topic or you want to collaborate in some way in the upcoming threads, please send me a PM. For previous week's "Everything about X" threads, check out the wiki link To kick things off, here is a very brief summary provided by wikipedia and myself: Model theory is a branch of mathematical logic that studies models satisfying a theory. A very rich area of mathematics which intersects with other branches through analogies and applications, it has been developed into different subbranches with different foci. Classical theorems include , and the . Further resources:
My area of study is "homogeneous structures", which studies a very special kind of model, a Fraisse structure. I use model theory in a much more combinatorics-focused way. At a high level, a Fraisse structure is an object defined using some number of relations with (countably) infinitely many points, and the object is "generic" in the sense that it contains every possible finite substructure with whatever combination of those relations you want. The canonical example is the Rado graph, where the relation is the binary relation "has an edge". It turns out that if you take the natural numbers and flip a coin for each pair of numbers (heads: edge, tails: no edge) then you'll most likely end up with the Rado graph; that is, a graph that has every finite graph as a subgraph (prove it!), and every two versions of the Rado graph generated like this will be graph isomorphic. The heart of the proof comes down to two observations: The 1-pt extension property says "given any two finite disjoint subsets A, B of nodes in the Rado graph, there is a new node with an edge to every node in A, and no edge to any node in B". (Prove this!) That property is really capturing the notion that the Rado graph is universal (has every graph as a subgraph) and is homogeneous (you can send any finite subgraph to any other isomorphic subgraph via an isomorphism of the whole Rado graph; this is stronger than the usual homogeneity that just asks that you can send any point to any other point). Here's a more detailed writeup of these notions. Imagine that you put all countable metric spaces (with distances in the rationals) into a hat, and pull one out. What will you get? The (rational) Urysohn space! The same type of construction works as in the graph case, except this time you have a binary relation for every (positive) rational number (that tells you two points are at that rational distance).
Here your 1-point extension is a little more subtle; it's: for every finite metric subspace X of the Urysohn space and every function f : X \rightarrow Q, if that function describes the distances of a possible new point to the points in X, then there is a point in the Urysohn space that witnesses this. (It's more complicated than the graph case because of the triangle inequality - which doesn't happen in graphs.) I wrote this up carefully here. These structures show up pretty naturally in mathematics, and their automorphism groups are naturally very rich (they are homogeneous, so they already have a lot of automorphisms). There's a very deep result called the Kechris-Pestov-Todorcevic correspondence that connects (structural) Ramsey theory to topological dynamics. Write up here. There's also a deep connection between (structural) Ramsey theory and the classification of closed supergroups of an automorphism group. This is much more "pure" model theory. I'll leave you with my favourite open problem (that is quite hard). I give you a graph G, and you give me any two partial graph isomorphisms of it; i.e. you first identify two copies of the same graph H_1 in G and tell me how they are isomorphic (via f), and then you find a second pair of isomorphic subgraphs H_2 and tell me how they are isomorphic (via g). Now, the question is: Is there a graph isomorphism h defined on all of G such that h extends f and g? The answer is no, so we instead allow you to make G even larger (so long as you don't touch H_1, H_2 and their isomorphic twins). The answer now is yes! There's a beautiful undergrad-level writeup of this fact here. This was first proved in 1992 by Hrushovski (a model theorist), and now we say that "graphs have the Hrushovski property, or the extension property for partial automorphisms (EPPA)". This property has many interesting model-theoretic and dynamical consequences. It's very natural to ask if other Fraisse classes have this property. For metric spaces (i.e.
the rational Urysohn space) this was proved independently in 2005 (and slightly later) by Solecki, Vershik and some others (Pestov, Sabok, Christian Rosendal). A tournament is a directed graph where between any two nodes there is precisely one arrow. It is known that the countably infinite "Rado" tournament is a Fraisse structure (the proof of the 1-pt extension property is basically identical to the Rado graph version). The open question is . That is, in my paragraph above for graphs, replace all instances of the word "graph" with "tournament". The question is subtle enough that I've presented false proofs before! It seems to be related to interesting graph theory (and there's a chance it's related to the fact that groups of odd order are solvable, which would signal that the problem is extremely hard).
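The coin-flip construction of the Rado graph and its 1-point extension property are easy to experiment with. A sketch (the function names are mine; on a finite sample a witness only exists with high probability, unlike the genuine infinite Rado graph, where one always exists):

```python
import random

def random_graph(n, seed=0):
    """Flip a fair coin for every pair {i, j}: heads = edge (the
    coin-flip construction described above, truncated to n nodes)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.5:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def extension_witness(adj, A, B):
    """Search for a node adjacent to every node in A and to no node
    in B (the 1-pt extension property); return it, or None."""
    for v in adj:
        if v in A or v in B:
            continue
        if A <= adj[v] and not (B & adj[v]):
            return v
    return None

adj = random_graph(500, seed=1)
print(extension_witness(adj, {0, 1, 2}, {3, 4}))  # a witness node, almost surely not None
```

With 500 nodes, each candidate works with probability 1/2^5, so failing to find a witness for these particular A and B is astronomically unlikely.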
Did you mean to sticky this? Also, for topics like this where some of us regulars know enough to write a "one-page" summary, it might be better in the future to give us a heads up beforehand. I don't have time right now, but I'd have been willing to do a high-level overview of model theory.
Recently, model theorists have made tremendous breakthroughs in algebraic geometry (broadly defined). The work of Pila and Hrushovski definitely comes to mind. I have seen several expositions which put the model theory in a black box. But then, I am not a model theorist. Can someone explain precisely what makes model theory so powerful? What is a quick way to understand these tools well enough - references are welcome. Thanks!!!!
The standard model of arithmetic is the usual natural numbers we all know and love. There's several ways it can be characterized. Let me give a few. ℕ is the unique model of arithmetic so that every element has finitely many predecessors. ℕ is the unique model of arithmetic so that its arithmetic operations are computable. (This is a famous theorem by Tennenbaum.) ℕ is the unique model of arithmetic which embeds as an initial segment into every model of arithmetic. So if you think any of these notions---finiteness, computability---are well-defined, you have a way to single out the standard model of arithmetic. Can someone explain how we know the axioms of TA are well-defined? TA is well-defined for the same reason that the Tarskian satisfaction relation for a structure is well-defined. TA is just the special case where this is applied to ℕ. Given a structure M and a formula φ in the language of M (possibly using elements of M as parameters) we want to be able to say whether φ is true in M, or synonymously, whether M satisfies φ. This is defined recursively. Atomic formulas are true just in case they really are true. For example, ℕ satisfies 1 + 2 < 3 + 3 because 1 + 2 really is less than 3 + 3. Boolean combinations are then defined in the obvious way; e.g. M satisfies φ ∧ ψ iff M satisfies φ and M satisfies ψ. Quantifiers are then done in the only way that makes sense; M satisfies ∃x φ(x) if there is some a in the domain of M so that M satisfies φ(a), and similarly for universal quantifiers. In this way, we define the satisfaction class for M---i.e. the set of all formulae in the language of M which are true of M. Given the satisfaction class over M one can define the theory of M. This is just all formulae in the satisfaction class for M which don't have any parameters from M. TA is defined to be the theory of ℕ. So what does it take to generate satisfaction classes? They're defined recursively, via a recursion of countable rank (it's recursion along a tree, not along a linear order, but that's no issue).
It only takes a weak fragment of set theory to prove that recursive constructions like this can be done, far far far below the level of ZFC. Alternatively, there's a computability theoretic way to get at TA. Given a set X let X′ (the "X-jump") be the Turing-jump of X. That is, X′ is the set of all (indices of) Turing-machines with X as an oracle which halt when given their own index as input. For example, 0′ is just the halting set for ordinary Turing machines (0 here being used to mean the empty set). (Detail: this depends on just how you formally define the halting set. But given any reasonable definition they are mutually computable from each other.) We can then iterate the jump operation: X^(α) is the α-th iterate of the Turing-jump on X. Not only can this iteration be done finitely long, but it can also be done infinitely long, iterated along any (countable) well-order. So 0^(ω) is what you get by iterating the Turing-jump countably many times starting from the empty set. Then 0^(ω) and TA are mutually computable from each other. In short, each time you take the Turing-jump it lets you figure out the truth of formulae with one extra quantifier in the front. So iterating the Turing-jump countably many times lets you figure out the truth of formulae with any number of quantifiers in the front. This shows that TA is hyperarithmetical, i.e. computable by iterating the Turing-jump along a computable well-order, specifically the well-order ω. That it's not arithmetical, i.e. computable by iterating the Turing-jump finitely many times, follows from Tarski's theorem on the undefinability of truth.
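The recursive Tarskian satisfaction definition sketched above can be mimicked over a finite initial segment of the naturals. This is only a toy: real TA needs unbounded quantifiers, and the tuple syntax here is my own invention:

```python
# Toy Tarskian satisfaction over the finite domain {0, ..., N}.
# Formulas are tuples: ('lt', t1, t2), ('and', f, g), ('not', f),
# ('exists', var, f); terms are ints, variable names, or ('plus', t1, t2).
N = 20
DOMAIN = range(N + 1)

def term(t, env):
    """Evaluate a term in a variable assignment env."""
    if isinstance(t, int):
        return t
    if isinstance(t, str):
        return env[t]
    op, a, b = t
    assert op == 'plus'
    return term(a, env) + term(b, env)

def sat(f, env=None):
    """Recursive satisfaction: atomic formulas are true iff really true,
    connectives are handled in the obvious way, quantifiers range over
    the (here: bounded) domain."""
    env = env or {}
    op = f[0]
    if op == 'lt':
        return term(f[1], env) < term(f[2], env)
    if op == 'not':
        return not sat(f[1], env)
    if op == 'and':
        return sat(f[1], env) and sat(f[2], env)
    if op == 'exists':
        _, var, body = f
        return any(sat(body, {**env, var: n}) for n in DOMAIN)
    raise ValueError(op)

# The atomic example from the comment: 1 + 2 < 3 + 3 really holds.
print(sat(('lt', ('plus', 1, 2), ('plus', 3, 3))))        # True
# "there is an x with 1 + x < 3"
print(sat(('exists', 'x', ('lt', ('plus', 1, 'x'), 3))))  # True
```

The recursion on formula structure here is exactly the countable-rank recursion the comment describes; what a weak set theory buys you is the right to run it with genuinely unbounded quantifiers.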
I'd be happy to help. For that one, I think what I wrote a while back is probably enough of an intro. Finding it is easy, it's the only gilded post I have. And honestly, I think anyone would struggle with trying to write intros to such a variety of different fields on a regular basis. The last few, I would've had nothing useful to contribute. You're making a solid effort. I'm not sure how you pick your topics, but one approach might be for you to make a post a few days before the all-about post just letting people know what the next topic is. That way anyone who's here regularly will know in advance, and can offer assistance if they feel like it (that takes the burden off you to ask/know who knows what and lets people decide silently to help or not). You could also put said announcement of the next topic in each all-about thread, presuming you decide them that far in advance.
[ "When Pi is Not 3.14 | Infinite Series | PBS Digital Studios" ]
[ "math" ]
[ "6wzn7f" ]
[ 11 ]
[ "" ]
[ true ]
[ false ]
[ 0.68 ]
null
Always, because pi is 3.14159.... You can't truncate a number and expect it to remain the same, right, /u/mybirthdaye ?
That is worse than what got linked to badmath. Wow.
He's right though even though he's a crank. The quote above is incorrect. He said "many equations of physics" instead of "many digits". Pi appears so much partly because circles are so apparent in nature. Maybe that's still wrong, but I agree with that statement.
This thread.
[ "Question on disk and washer method" ]
[ "math" ]
[ "6wyss0" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.6 ]
I'm writing a paper on the volume of a torus using disk, shell, and washer methods and I'm a bit lost. What's the difference between the disk and the washer method in the case of a torus? When I use the disk method, the cross-sectional area is itself a washer, and so I end up doing the exact same steps for both the disk and washer methods. What's the difference, in the case of a torus, is basically my question.
The disk method is inapplicable. Washers for shapes with a hole, disks for shapes without.
Hmmm, that actually makes sense. When I used the disk method I ended up getting a washer because the cross section of the torus is just a circle with a hole in it. Thank you man
I am not familiar with the disk washer method. But here is a rough idea. You take the torus and cut it into very thin cylinders. If A is the area of the cross section of each cylinder (probably this is the disk) and dx is the thickness, then the volume is A dx. Now the total volume is \int A dx. Integral over what? Well, over the domain of dx, which goes around the longitudinal circle (the washer?). x is the arc length function of a circle, hence dx = 2\pi r dt where r = radius of the circle and t goes from zero to one. Change of variables gives you 2\pi r A. More intuitively: you can take a torus, cut it along a disk, and then make it into a cylinder. This will involve stretching, but that doesn't change the volume. Hence you can apply the volume formula of a cylinder. You need to figure out what the dimensions of this cylinder will be.....
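For the torus specifically, the washer picture can be checked numerically: at height y the cross-section is a washer with outer radius R + √(r² − y²) and inner radius R − √(r² − y²) (tube radius r, centre-line radius R), and summing π(outer² − inner²) dy should recover the known V = 2π²Rr². A sketch (midpoint rule; the function name is mine):

```python
import math

def torus_volume_washers(R, r, n=100_000):
    """Washer method for a torus: at height y in [-r, r] the slice is a
    washer with outer radius R + s and inner radius R - s, where
    s = sqrt(r^2 - y^2). Midpoint-rule sum of pi*(outer^2 - inner^2)*dy."""
    dy = 2.0 * r / n
    total = 0.0
    for i in range(n):
        y = -r + (i + 0.5) * dy
        s = math.sqrt(r * r - y * y)
        total += math.pi * ((R + s) ** 2 - (R - s) ** 2) * dy
    return total

R, r = 3.0, 1.0
print(torus_volume_washers(R, r))  # ~59.2176
print(2 * math.pi**2 * R * r**2)   # exact 2*pi^2*R*r^2, also ~59.2176
```

Note the washer integrand simplifies to 4πRs, which is exactly the "cut it open into a cylinder" intuition (Pappus's theorem) in disguise.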
Yes those steps i am familiar with. My problem is with the naming and the difference between methods. Thank you though
These aren't standard names. These are just terms that your teacher or textbook is using to explain what the cross-sections look like.
[ "Highschooler looking for feedback/suggestions on a math essay idea" ]
[ "math" ]
[ "6wygmo" ]
[ 12 ]
[ "" ]
[ true ]
[ false ]
[ 0.78 ]
[deleted]
I think you could definitely write a 3000 word paper on the construction of the reals, provided you talk about different methods and the history behind each one of them. Then, as you mentioned, you could write about how these constructions allow one to show \pi a real number, etc. Maybe even adding some discussion on the cardinality of the continuum would be fun. PM me if you need help on any of this.
I'd totally go for your original idea. I mean, I tend to have the opposite problem - my essays always end up too short because I don't have enough to say. I envy you if you're worried about going overboard. the Eudoxus Reals.
You're not meant to actually talk about the history of whatever you're writing about for its own sake with the math EEs. Perhaps in passing but you are meant to really focus on developing the mathematical ideas.
"The people who grade the essays have to pretend they know nothing outside the syllabus" - This is only a requirement for the Math IA, not the EE. Making an assumption of "nothing outside the syllabus" wouldn't work for the EE, since there are people ranging from Math Studies to Further Maths. I personally did my math EE on the RSA cryptosystem and various factorization methods (most major Category 2 algorithms and Shor's algorithm). Obviously that's beyond the syllabus of even FM. "I feel like a comparative essay on Cauchy sequences, Dedekind cuts, and Eudoxus Reals constructed from Z would be really interesting, but I'm struggling to think of how I'd be able to justify it to my advisor." - You're still right in this regard though: you probably can't justify a comparative essay to your advisor. Criterion A by itself is just whether or not you have a solid investigable question. You can try to go the route of a statement of discussion, but I suggest against it. Basically, you're not going to score highly unless you nail down a specific question you want to answer first--saying "I want to compare ___ with ___" isn't gonna cut it for the IBO. Although if you do decide to just do a comparative essay, you can take solace in the fact that you probably still won't fail the EE.
[ "Do stacks, sheaves, schemes, sites, toposes, and other fancy machinery have anything to do with nice objects like conics, cubics, and quadrics?" ]
[ "math" ]
[ "6wxlg5" ]
[ 46 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
I'm working through "Conics and Cubics" by Bix, and "Geometry: A Comprehensive Course" by Pedoe. I enjoy the concrete calculations that give me insight on beautiful geometric objects. For example, I am in awe of the fact that given two conics in different planes of the affine space with two common points, and a point not in these planes, there is a unique quadric which contains these conics and the point. This is just the tip of the iceberg; there are many more sublime facts about these simple geometric objects. When I do a simple search on contemporary research activities that have ties with what I currently enjoy studying, I am led inevitably to the field of modern algebraic geometry. For example, an encyclopedic reference for algebraic geometry is the "Stacks Project". Browsing the chapters, I am struck by how abstract the concepts are: huge amounts of language must be developed first; things like sheaves, schemes, sites, toposes, categories, groupoids are given infinite attention. But there's nothing to suggest that these concepts have anything to do with the geometric objects I enjoy studying. Are conics, cubics, and quadrics no longer fashionable, as is Euclidean geometry? Is algebraic geometry concerned with abstractions for more serious mathematics that have very little to do with humble geometry? Why do people care about toposes, for instance? Is there any field which is concerned with the "humble geometry" I'm studying, or must I search for a different field if I want to go to graduate school?
You're intrigued by classical geometry because you understand it and you haven't run up against its limitations yet. It's all new and sexy. But it's been done. Yes, it's useful under specific conditions, but those conditions limit the amount of novel work that can be accomplished. It's like seeking to make a physics career out of Newtonian kinematics. No one is saying Newton's laws aren't useful or interesting, but GR exists because those old laws didn't give you the full poop. Do not limit yourself to what you think is interesting as an undergrad. Abstraction and generalization are the cornerstones of mathematics.
Modern algebraic geometry was (and is) inspired by the study of such objects, but for various reasons it has been necessary to develop more abstract and broad objects, and while the original, very geometric picture might not be obvious, it is still present. People still do 'classical' algebraic geometry in some sense, so you could definitely end up working on more down-to-earth problems involving surfaces and curves in the traditional sense, although for many of these problems classical techniques might not be all that powerful, and so in many cases you should at least know some of the more algebraic/modern techniques used to study these objects.

Sheaves are a categorical and fancy way of studying local information on your space. They are very general, but they are very natural, and it's not hard to come up with plenty of examples in more down-to-earth geometry. Schemes came to be in part because, in the algebraic-geometric sense, classical algebraic varieties lacked some information which would be good to have. They also allow you to use the full power of algebra to study these objects; for example, schemes allow nilpotent elements. Stacks can also be motivated by a couple of technical issues, for instance through moduli problems which are not well behaved for schemes, or through quotients of schemes. Toposes are very general objects with many interesting properties, but in this context they are a great setting to do geometry: topology doesn't really reflect all the geometric information in these spaces, sites were needed for a better setting, and thus sheaves on sites (toposes) will tell you things that topology won't.

Like I said, a lot of the things modern algebraic geometers care about are inspired and motivated by these more classical problems with classical objects, although the area has grown quite a bit and it now intersects a lot of stuff, like number theory and logic and other things.
I don't know how far you are in your studies, but I would recommend reading a book on classical AG, such as Shafarevich. A classic result is the one about cubic surfaces containing 27 lines, and he covers that in this book, so I feel you might enjoy it.
This is very true. I would add that on a personal note, I did not see the point of some of this abstraction a few years ago, and now I find some of it very interesting. A lot of these ideas are only interesting (to me) when someone shows you the right perspective on them, or the critical example that they enable you to work with.
OK, here is another answer. The Bezout theorem in its simplest form says that two curves in the plane of degrees m and n meet at mn points if you count multiplicity. But it is only true if you use complex coefficients and work in the projective plane. Now you have to ask: what is the definition of a curve in the projective plane supposed to be? What is the definition of the multiplicity of two curves at an intersection point supposed to be? Also, some people like to ask: can I make sense of the curve without needing to think of it as embedded in any particular larger projective space?

One very common trick is to take a curve in the plane, take any line at all, and look at the points (with multiplicity) where the line meets the curve. The total number of points with multiplicity is a definition of the degree of the curve in that embedding. But you can actually recover the larger projective plane just from such a finite set of points. One old-fashioned way of doing it is to think: as the line moves around, this finite set of points moves around. The possible configurations of this finite set with multiplicities is called a 'linear system of divisors.' There is one line in the projective plane for each 'element' of the linear system. The theory of 'line bundles,' 'linear systems' and 'divisors' gives a way of relating different projective embeddings of a curve. For curves of genus larger than 1 there is a 'canonical linear system' (there always is, but it becomes nontrivial when the genus is larger than 1) which has to do with differential forms, and the subject acquires a beauty and symmetry.

Regarding your example, where you say that given two conics in different planes in R3 with two common points, and a point not in these planes, there is a unique quadric which contains these conics and the point: I owe you a proof of this in the language of divisors and linear systems. First step: is a hyperplane section of a quadric going to be a conic?
The 'degree' of the quadric is supposed to be found if we intersect with more and more hyperplanes until we get a finite set of points. This is just going to be those two points in the intersection of the 2 planes.

Hmmm... the linear system of quadrics in projective 3-space is I think 9 dimensional, and for what you're saying to be true, the quadrics that pass through those two conics have to form just a 1 dimensional linear system, so there can be one through each point. A quadric picks out a conic in each plane; the linear system of conics in a projective plane is I think 5 dimensional, so if you fix a plane and look at the quadrics which meet the plane in a fixed conic, that should be 9-5=4 dimensional. What must happen is that once you look at the 4 dimensional linear system of quadrics that meet one of the planes at a fixed conic, there must be just a 1 dimensional linear system of these that meets each conic in the OTHER plane which happens to pass through those points... for some reason... um... having to do with how everything is configured. Having to do with how the linear system of conics in the OTHER plane that we're considering is those meeting two points. It sort of works if we think that plane conics is 5 dimensional and those passing through 2 points is probably 3 dimensional, so we are looking at 9-5-3=1.

Edit: I do understand how easy it would be to get lost if you start randomly searching these terms, because people always write about generalizations or cases where things have difficulties. One place to start is understanding the one dimensional projective space, the projective line, which is the same as the Riemann sphere. It is also the unique smooth curve of genus zero. The notions of divisors and linear systems have to do with 'rational functions,' and a 'rational function' on the Riemann sphere is the same thing as what people call 'rational functions' in precalculus math.
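As a sanity check on those dimension counts (not in the original comment): the linear system of degree-d hypersurfaces in P^n has dimension C(n+d, d) - 1, counting monomials and subtracting one for projectivization. A couple of lines of Python confirm the 9 and 5 used above:

```python
from math import comb

def linear_system_dim(degree, proj_dim):
    """Dimension of the linear system of degree-d hypersurfaces in P^n:
    the number of degree-d monomials in n+1 variables, minus 1 for
    projectivization."""
    return comb(degree + proj_dim, proj_dim) - 1

print(linear_system_dim(2, 3))  # quadrics in P^3: 9
print(linear_system_dim(2, 2))  # plane conics: 5
print(linear_system_dim(3, 3))  # cubic surfaces in P^3: 19
```

The last line is the familiar fact that cubic surfaces form a P^19.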
It is useful, if you have something like T/(T+1), to write T=y/x and write it as (y/x)/((y/x)+1), and rewrite this as a ratio of two homogeneous polynomials in x and y of the same degree. I'll leave it as an exercise to do that. Then the values of (y/x), where x or y can be zero but not both, are the points of the Riemann sphere, and the function sending (y/x) to that quotient of homogeneous polynomials is a well defined function from the Riemann sphere to itself. It is a covering space except at finitely many points.

A linear system is always the projectivization of the vector space of rational functions with poles no worse than some particular divisor. This is all really elementary and explicit, so instead of searching terms like 'linear system' in google, try working out an example: take the vector space of rational functions which are allowed a pole of degree n at the point (y/x) where x is zero and y is 1. The corresponding linear system is just all n-tuples of points in the Riemann sphere. If you think of this point as a 'point at infinity,' when you delete it you just get the complex number line. The actual rational functions with no poles except at infinity are ordinary polynomials in one variable. In this way there is a bijection between polynomials in one variable of degree at most n, modulo multiplication by scalars, and n-tuples of points (including multiplicity) in the Riemann sphere. This is not surprising, because when you ignore the point at infinity you are talking about sets of at most n points in the complex line, and you are talking about how polynomials are determined by their roots.
The conic section is a great object to illustrate the purpose of stacks. A conic section may be characterized by its eccentricity. If e=0 you have a circle. If 0<e<1 you have an ellipse. e=1 is a parabola, and e>1 is a hyperbola. So the space of possible eccentricities is the real half line [0,∞). If we assemble all the conic sections into a single space, so that similar conic sections can be "close together", we will have a single geometric object which we can use to reason about all conic sections. The moduli space. However there's a problem here; there may be more than one conic section of a given eccentricity. OK, we know that any two conic sections with the same eccentricity are isomorphic (think congruent, or similar). So we might want to agree that it's ok to just use one conic of each eccentricity. Or we might think it's important to keep all the conics, and just record which ones are isomorphic to each other. The latter option is a stack. You could also consider the stack of cubics, or quadrics.
[ "Lost Turing letters just discovered." ]
[ "math" ]
[ "6wwxm3" ]
[ 26 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
null
Interestingly the collection contains a papyrus scroll signed by Turing advocating the use of 60 symbols in a Turing machine tape alphabet. It makes sense because trigonometry was used a lot in the war.
Turing machines can't represent/operate on real numbers to full precision. NJW was right all along!!!
https://www.youtube.com/channel/UCXl0Zbk8_rvjyLwAR-Xh9pQ
NJW ?
Will Numberphile do a video about these?
[ "Irrational Number Part 1" ]
[ "math" ]
[ "6wxxhw" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.27 ]
null
This video is wrong. Rational numbers are NOT exactly the numbers that have finitely many digits. For example 1/3 = 0.3333... is rational. The sum of two irrational numbers is NOT necessarily irrational. For example π and -π are irrational, but π + -π = 0 is rational.
"Closed minds"? You mean: "basic desire to be correct"
4 pi. I don't doubt you taught, I spend the first month of every calc one class undoing the bullshit people like you do.
> I am not trying to say the definition in the video. I know the definition. Buddy rational numbers are the ones where you know where the number ends.

Clearly you know the definition, given that you've given an incorrect definition twice; once in the video and once in a reply stating that you're wrong, affirming that you were in fact correct.

> The video is meant for an easy to digest understanding.

No, the video is outright spreading misinformation. You're going to get viewers who come away from the video thinking that 0.3333... isn't a rational number, even though it is. This would be tantamount to me saying that the force due to gravity is constant at all distances from a body.

> Which you aren't getting because you can't see beyond textbook definition which is sad.

We use the textbook definition because in mathematics, things need to be precise. We need to make sure we're all talking about the same things. Don't blame textbooks for you not being able to understand what a rational number is.

> And what was being discussed was 1/3 = 0.3333.... And not 0.3333

Then YOU should have put an ellipsis after 0.3333 in your original reply. Communicating badly and then acting smug when you're misunderstood is not cleverness.

> 0.3333.... is not equal to 0.3333

Nobody but you was saying that it is.

> That's why you can write 0.3333 as 1/3.

And I responded that if you thought that 0.3333 = 0.3333..., then you'd also have to think that 0.33 = 0.3333..., which leads to a nonsensical conclusion. Don't blame ME for your own shortcomings.

> I am Bachelor's in Physics. And I love mathematics. I have taught mathematics to preschool till high school in my early college and post college days.

Then you'd KNOW how wrong your definition is.

> You guys are insane. I am sorry for your closed minds.

Right, you've bordered into crankery and unnecessary insult now.
> Buddy rational numbers are the ones where you know where the number ends.

No, that's not true. A simple glance at any good online resource would tell you that:

> In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.

> a number that can be expressed exactly by a ratio of two integers

> A rational number is a number that can be expressed as a fraction p/q where p and q are integers and q != 0.

It's got NOTHING to do with the decimal representation of the number.

> That's why you can write 0.3333 as 1/3.

No, you can't write 0.3333 as 1/3, because 0.3333 is 3333/10000. That's NOT equivalent to 1/3. I bet you'd also write 0.33 as 1/3, too. Which means that you think that 0.0033 = 0, since that's the difference between these two decimals. If you meant 0.3333..., you should have put in the ellipsis. The fact that you didn't just makes your comment look more uninformed when you've already committed a pretty serious error.
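The distinction between the finite decimal 0.3333 and the repeating decimal 0.3333... is easy to check with exact rational arithmetic; here is a small sketch using Python's fractions module:

```python
from fractions import Fraction

# The finite decimal 0.3333 is exactly 3333/10000 -- a rational number,
# but NOT the same rational number as 1/3.
finite = Fraction("0.3333")
third = Fraction(1, 3)

print(finite)           # 3333/10000
print(finite == third)  # False
print(third - finite)   # 1/30000, the gap between the two
```

The repeating decimal 0.3333..., by contrast, is exactly 1/3: it is the limit of the finite truncations, each of which falls short by a shrinking amount like the 1/30000 above.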
[ "Mathematicians, how important was math to you in gradeschool?" ]
[ "math" ]
[ "6wwkk9" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.89 ]
I'm curious to know how math impacted your life at an early age. Did you have someone who ignited your interest, was it not even on your radar, what? I'm asking in particular because I have a 7 year old whose life is numbers. He's getting nothing but basic addition and subtraction in school, and every year I'm told that it's much more important for him to focus on social skills and communication and math isn't a big deal. I think those things are important, but he's fine in those areas and I feel like I am letting him down by letting them completely neglect what is an obvious natural talent/passion. I'm not a math person myself so I don't know if this is something I should continue to push, and I would love some input from those of you who have been through this personally.
I didn't fall in love with math until I was about 18, and I was only exposed to higher math because I needed it for my CS degree. You should do everything you can to help him pursue his passion. Math is a big deal, and there's no reason to limit his potential (especially if he's doing well socially). It's not often that a 7-year-old will display such passion, especially for something like math. My advice? "Special education" is not just for those falling behind. If the school can't or won't provide the advanced instruction your son needs (yes, needs), you'll need to look elsewhere. Is there a Gifted and Talented Education program in your area? Try looking into a local community college; they should have arithmetic classes. He may even be able to handle basic algebra if you remember enough to help him out when he needs it. I wish I had fallen in love with math when I was that age. I probably could have, if the school and my parents had understood my needs. As it was, I grew to hate it; it was taught too slowly and simply and seemed too tedious. It wasn't until I got to the more advanced pace of college classes that I began to enjoy it. I've been there, done that. Make sure your kid is being challenged; I promise you won't regret it.
No need to worry too much. Throw the following book at him and let him burn some hours. https://imaginary.org/sites/default/files/5to15_en_gb.pdf
I'm sure those teachers are wonderful people, but they're part of the reason the US is 38th in math. Whether they think it's important is irrelevant. Is it important to your son?
You could try looking for math circles in your area. Those are social events where someone with math knowledge leads a group discussion about some interesting math with kids. They're very interactive and very social, as group work and discussing solutions is a big part of the circle. I attended them in high school, although I've seen them held for kids as young as in third grade. If you're interested in hearing someone else's perspective, I was always good at math as a kid, but I never went out of my way to do math until seventh grade. I started becoming more and more active with math until high school, when I spent most of my time doing math. Most of my friends were kids I met through math (ARML and MOP introduced me to a lot of people). If you want something to challenge him, you could check out Beast Academy by Art of Problem Solving. I've never read those books in particular, but Art of Problem Solving's high school books are considered to be some of the best high school level math books on the market--I used them as a student myself and now tutor kids for math competitions through them. If you're looking for more resources, searching up math circle activities for young children and trying to go through a few of those with him might help.
Schools can be tricky with gifted kids. Out of all my friends from high school (most of whom attended gifted schools) who I know from math, none of them felt satisfied with their school's way of doing math. These were kids from different states and cities, too. The standard approach by schools is to just push the kid up a year, especially in middle and high school. This doesn't work well, as your kid doesn't really care about learning math faster; he just likes puzzles and hard problems, but regular school math doesn't like doing anything even remotely difficult or interesting unless you get very lucky.
[ "Why is this wrong?" ]
[ "math" ]
[ "6wwhx0" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.42 ]
I had the following thought:

9 is the largest 1 digit number
99 is the largest 2 digit number
999 is the largest 3 digit number

This trend holds true for all multiple digit numbers, no matter how many digits.

99 < 99.1
99.1 < 99.9
99.9 < 99.91
99.91 < 99.99

This trend holds true no matter how many decimal places are added. Ergo, the largest possible number is an infinite string of nines, followed by a decimal point, followed by an infinite string of nines. As such, infinity is capable of being defined as 99(repeating infinitely).99(repeating infinitely). This can be altered for any other number system by replacing nine with the largest single digit number.

I asked a math friend, and he said it was wrong, but he wasn't sure why it was wrong, so I ask: why is this wrong?
You are working with garbage if you start throwing an infinite strings of digits to the left of the decimal point. Every real number lies between two integers, so once you start writing stuff like ....999 (infinitely many 9's to the left as some kind of "limit" of 9, 99, 999, and so on), you are NOT working with real numbers anymore: such expressions do not lie between two integers.
Yeah, of course. You can't just write down a definition and assume it's well-defined without proof. If ...999.999... is a number, and place value arithmetic works the usual way, then we can subtract .999... from it to obtain ...999, which should also be a number. You can verify yourself that it's the additive inverse of 1, since all the carries make 0, so by the uniqueness of additive inverses, ...999=-1. And we subtracted .999...=1, so we see that ...999.999...-1=-1 which implies that the number defined above is 0. But zero is not the largest possible number, so this definition cannot possibly be valid.
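The carrying argument can be checked one finite truncation at a time: a block of k nines plus 1 carries all the way out to 10^k, so in arithmetic on the last k digits a string of nines behaves exactly like -1. A small sketch of that check:

```python
# A block of k nines plus 1 carries all the way out to 10**k, so in
# "last k digits" arithmetic a string of nines behaves exactly like -1.
for k in (1, 3, 8):
    nines = int("9" * k)             # e.g. 999 for k = 3
    assert nines + 1 == 10**k        # every digit carries
    assert nines % 10**k == (-1) % 10**k
print("carries checked for k = 1, 3, 8")
```

This is the finite shadow of the statement that ...999 = -1 in the 10-adic numbers.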
As such, infinity is capable of being defined as 99(repeating infinity).99(repeating infinity). Gonna need a proof of that sir
A proof for a definition?
Repeated nines after the decimal point have a definition in terms of limits. Repeating to the left doesn't (unless you work with p-adics, but then it will be finite, and repeating after the decimal won't make sense).
[ "Scaling a Timeline" ]
[ "math" ]
[ "6ww710" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 0.8 ]
I want to make a timeline of world history for my kids to add events to as they learn about them. I have a crazy idea to have it wrap all the way around the inside of my house: going in and out of rooms, the whole works. As I started to measure and think about what bit of wall would represent each year, I realized something kind of obvious: more space should be given to more recent years and less space to the further past. But I don't want to arbitrarily or suddenly change the size of each year; I want them to steadily and evenly increase in size as I go. Any idea what math I should use to set it up? I tried making a spreadsheet where each year is longer than the one before it by the same ratio, but that explodes in size too quickly, no matter how small I make the ratio. What am I doing wrong? (btw I'm planning on having it start at about 4000 BC and go to 2030 AD or so)
You probably want to use something roughly like a log scale, though maybe you want to switch to a constant scale at some point for the most recent events. First you need to figure out how much total length you have and what kind of scale you want for the earliest and latest ends. Then you can try to fit an appropriate function to match those criteria.
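One concrete way to realize the log-scale suggestion; the wall length, endpoints, and mapping function here are illustrative assumptions, not something from the thread:

```python
import math

START, END = -4000, 2030   # timeline endpoints in years (negative = BC)
WALL_CM = 3000             # assumed total wall length in centimeters

def position(year):
    """Map a year to a distance along the wall.

    Distance grows with the logarithm of "years before the end of the
    timeline," so recent years get progressively more room.
    """
    ref = END + 1                     # reference point just past the end
    span = math.log(ref - START)      # total log-range to normalize by
    return WALL_CM * (1 - math.log(ref - year) / span)

for y in (-4000, -1000, 0, 1500, 1900, 2000, 2030):
    print(f"{y:6d} -> {position(y):7.1f} cm")
```

The mapping pins 4000 BC to position 0 and 2030 AD to the end of the wall, and each century toward the present gets strictly more room than the one before it.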
https://xkcd.com/1017/ You could do something like this.
Wow, there is an xkcd for everything, I'll try it. It even links to a spreadsheet I can mess with!
The total length shouldn't matter though right? I figure once I get the proportions right I can fit it to any length, and if I can't get proportions to make sense, then I don't need to waste the time measuring my whole house. I thought what I was doing was roughly a log scale (each year equal to the last year times a constant)
Backward in Time: "People tell me I have too much time on my hands, but really the problem is that there's too much time, PERIOD." This comic has been referenced 7 times, representing 0.0042% of referenced xkcds.
[ "How is SHA-256 not reversible? I.E. Why couldn't you find all acceptable values for bitcoin hashes by reverse-engineering the starting bytes?" ]
[ "math" ]
[ "6ww6lh" ]
[ 12 ]
[ "" ]
[ true ]
[ false ]
[ 0.88 ]
null
Hashing algorithms are not irreversible in principle, just designed to be computationally expensive to reverse. In most cryptographic situations, the gold standard is that the algorithm is designed in such a way that no technique to attack it is better than a straight-up brute force search, which is computationally expensive and often infeasible. For hashing algorithms, databases of precomputed inputs are much more realistic than with encryption algorithms, and this is why plain hashes are not suitable for secure storage of passwords. Instead, one should use algorithms like scrypt or bcrypt, which are designed to be very computationally expensive for the express purpose of storing passwords.
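The contrast between a cheap plain hash and a deliberately expensive one can be sketched with the standard library; pbkdf2_hmac stands in for scrypt/bcrypt here (same idea: an adjustable work factor), and the password and salt are made-up values:

```python
import hashlib

password = b"hunter2"            # illustrative only
salt = b"some-random-salt"       # in practice: os.urandom(16)

# A plain hash is cheap -- an attacker can test billions of guesses
# per second against a stolen database of plain hashes.
plain = hashlib.sha256(password).hexdigest()

# A key-derivation function repeats the work many times, making each
# guess proportionally more expensive for the attacker too.
slow = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

print(len(plain), len(slow))
```

Raising the iteration count (200,000 here) is how you keep the work factor ahead of hardware improvements.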
/r/crypto is probably better suited to this particular question; however, let me give it a whirl. You're right. You could. It would take multiple universes to do it, though. SHA-256 has (ostensibly) 256 bits of output, which means there are 2^256 different possible outputs. 8x GTX 1080s can perform about 23012 MH/s, which is about 2^34 hashes per second. It would take 5.0317915e+66 seconds. For the record, there have been about 4.3e+17 seconds since the beginning of the universe, so it would take about 1.1701841e+49 times the span from the formation of the universe to now to brute force it.
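The arithmetic is easy to reproduce, taking the quoted 23012 MH/s hash rate at face value:

```python
# Reproduce the brute-force estimate: 2^256 outputs at 23012 MH/s.
outputs = 2**256
rate = 23012e6                  # hashes per second (quoted 8x GTX 1080 figure)
seconds = outputs / rate

age_of_universe = 4.3e17        # seconds, roughly
print(f"{seconds:.7e} seconds")
print(f"{seconds / age_of_universe:.7e} universe-lifetimes")
```

Both figures land on the order of magnitude quoted in the comment (about 5e66 seconds, about 1.2e49 universe-lifetimes).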
If I understand your confusion correctly, you're asking why you can't just do each step of the algorithm in reverse to reverse the entire algorithm. If I start with A, do 10 computations, and get B, why can't you, given B, undo the 10th one, then undo the 9th one, and so on until you undo the first one and recover A? Assuming that's your thinking, the problem is that there are simple mathematical operations that are easy to compute in one direction and extremely difficult to do in the other direction. For example, what if I write down two prime numbers, and tell you their product. Can you recover the primes I wrote down? If they're small enough, maybe, but if they're both several hundred digits long you're never going to find them. An example that's more directly relevant to cryptography is the discrete log problem. Suppose I give you a large prime p, and a number b. I choose a secret number a from 1 to p, and tell you the value of b^a mod p. Can you use it to recover the value of a? Without the mod p, all you'd have to do is compute the base b logarithm of whatever value I told you, but with the mod p, if p is very large you're out of luck. The best you can do is just start computing successive powers of b and hope you eventually hit the right one.
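The asymmetry described can be seen directly in code: Python's built-in three-argument pow computes b^a mod p quickly even for enormous exponents, while recovering a from the result has nothing much better than trying successive powers. The numbers below are small and purely illustrative, so the brute force actually finishes:

```python
p = 1000003       # a prime (small enough to brute-force for the demo)
b = 5
secret_a = 123456

# Forward direction: fast, even for huge exponents (square-and-multiply).
value = pow(b, secret_a, p)

# Reverse direction: try successive powers until one matches.
def discrete_log(b, value, p):
    x = 1
    for a in range(1, p):
        x = (x * b) % p
        if x == value:
            return a          # an exponent a with pow(b, a, p) == value
    return None

print(discrete_log(b, value, p))
```

With a cryptographically sized p (hundreds of digits), the forward direction stays fast but the loop becomes hopeless, which is exactly the one-wayness the comment describes.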
Thanks for the clarification. How is it possible to design an algorithm with no method faster than brute force? Is O(n) (algorithmic computational difficulty) vs. O(2^n) (brute force difficulty) for values of n less than 997 a valid structure in which to create a "gold standard" algorithm? Is my thinking along the right track with this?
When SHA-256 is computed, the input message is first expanded from 16 32-bit words to 64 words. All the bits in the first 16 words have some influence on the bits in the later 48 words. So now you have this very large internal message. Each word of this expanded message is iterated over with a function that has a 256-bit state. The initial state is a constant. At each iteration, the new state is a function of the previous state and a word from the expanded message. At the end of the function, the internal state becomes the hash.

So if you want to reverse this process, you need to work backwards. You have your desired final state and your known initial state (remember, it's constant). All you need is the 64-word expanded message, which is a function of a 16-word message. How do you go about finding those 16 words? You could start with a first guess. Work backwards to the beginning and compare your internal state to the initial state. You'll probably have lots (maybe 50%) of matching bits. You want to keep those matching bits and flip the ones that don't match. You can do that by changing a few of the bits in your message. But the later 48 words of the expanded message are a function of the first 16 words, so now those later bits change. But you are working backwards: any changes in the later part will affect the internal state near the beginning. But we didn't want to change the entire state, just those few bits that didn't match! A single change to any one part (message, expanded message, state) affects the other two. This is why it's so hard to invert hash functions.
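That "a single change affects everything" behavior is visible from the outside too. A quick sketch of the avalanche effect with Python's hashlib: flipping one input bit changes roughly half of the 256 output bits.

```python
import hashlib

msg = b"the quick brown fox"
tweaked = bytes([msg[0] ^ 1]) + msg[1:]   # flip one bit of the first byte

h1 = int.from_bytes(hashlib.sha256(msg).digest(), "big")
h2 = int.from_bytes(hashlib.sha256(tweaked).digest(), "big")

# Count differing output bits: for a good hash this clusters near 128.
diff = bin(h1 ^ h2).count("1")
print(f"{diff} of 256 output bits changed")
```

This is precisely why the bit-by-bit correction strategy described above keeps destroying its own progress.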
[ "Help me screw with my buddy" ]
[ "math" ]
[ "6wvpfa" ]
[ 0 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.44 ]
null
Weak cheese. /r/homeworkhelp
Back in my day, we used to have to pay real money to cheat! You know, find the kids playing Magic cards and pay those bastards! This is a poor attempt at getting a problem solved, youngin'.
Not really, it should just be -11cos(x) - 8ln(cos(x))
Oh man, thanks. Just out of curiosity, is this a hard question?
I ain't lyin boss. Honest to god. School isn't even starting for me yet! I tried putting it in wolfram alpha and I have no idea if it's the right answer or not, embarrassing to say https://imgur.com/gallery/lOTvK here's some proof.
[ "What's the point of memorizing a bunch of integration techniques if most models/functions that describe real world phenomenons can only be integrated numerically?" ]
[ "math" ]
[ "6ww69i" ]
[ 18 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
null
Integration has a lot of uses in areas other than numerical models. Most of (theoretical) physics, for example, uses a wide range of integration techniques.
But when OP is not interested in theoretical physics why the heck do they still teach these things to him ??? He will learn something which he will never use. And as we all know, learning things is bad for your health!
Even in numerics, understanding methods of integration can be useful in making an algorithm faster. What I have found to be less useful is methods of solving differential equations (which is similar in nature).
To a mathematician, u-substitution, integration by parts and whatnot aren't just techniques to integrate specific functions, they're general and powerful theorems. For a very simple example, consider how you'd go about verifying that gamma(n+1) = n*gamma(n).
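For the example mentioned, integration by parts (with u = x^n and dv = e^{-x} dx) gives the functional equation in one line; this is the standard computation the comment alludes to:

```latex
\Gamma(n+1) = \int_0^\infty x^{n} e^{-x}\,dx
            = \Bigl[-x^{n} e^{-x}\Bigr]_0^\infty + n \int_0^\infty x^{n-1} e^{-x}\,dx
            = 0 + n\,\Gamma(n).
```

The boundary term vanishes because x^n e^{-x} tends to 0 at both endpoints (for n > 0), so the "technique" is really a theorem about how differentiation and integration trade places.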
Pretty important for the theoretical aspects as well. In PDE and functional analysis the definition of the weak derivative is done through integration by parts.
[ "Question about convergent improper integrals." ]
[ "math" ]
[ "6ww1jn" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.76 ]
Suppose you have an improper integral evaluated at zero and infinity with an integrand f(x) that is convergent. Again suppose that there is another convergent improper integral evaluated at zero and infinity with an integrand g(x). Let f(x) = g(x) at most a countable number of points. Is the above information enough to prove that the improper integral from zero to infinity of f(x)*g(x) is convergent?
We are speaking of integrals, not numbers. It's not always true that Int f(x)g(x) dx = Int f(x) dx Int g(x) dx, in fact that's very rare.
That won't fix it. Explaining in symbols is messy, but just think of the functions 1/sqrt(x) and 2/sqrt(x) on [0,1], and then on [1,2] make them smoothly go down to zero, then stay zero from 2 on. The only fact like this I can think of that is always true is that if f(x) and g(x) are functions such that Int |f(x)|^2 dx and Int |g(x)|^2 dx both converge, then Int f(x)g(x) dx also converges. This is one of the main reasons why we use L^2 whenever possible.
No. Consider f(x) = 1/sqrt(x) for 0 < x < 1, and 0 otherwise. Let g(x) = 2/sqrt(x) for 0 < x < 1 and 1/x^2 for x >= 1 (and 0 at 0). Then these functions are only equal at 0, but their product is 2/x on the interval 0 < x < 1, which is not integrable.
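The counterexample can be checked directly from the antiderivatives: on [eps, 1], the integral of 1/sqrt(x) is 2(1 - sqrt(eps)), which stays below 2 as eps shrinks, while the integral of the product 2/x is -2 ln(eps), which grows without bound. A small numeric sketch using those formulas:

```python
import math

# f(x) = 1/sqrt(x) and g(x) = 2/sqrt(x) near 0: each integrable alone,
# but their product 2/x is not.
for eps in (1e-2, 1e-4, 1e-8):
    int_f = 2 * (1 - math.sqrt(eps))   # integral of 1/sqrt(x) over [eps, 1]
    int_fg = -2 * math.log(eps)        # integral of 2/x over [eps, 1]
    print(f"eps={eps:.0e}: int f = {int_f:.4f}, int f*g = {int_fg:.2f}")
```

Each halving of eps adds a fixed amount to the product's integral, the signature of a logarithmic divergence.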
/u/trent1inventor , if you want to learn more about this, I recommend looking up "integration by parts". It's a useful trick that tells you how to integrate the product of two functions, and its derivation/proof is pretty damn clever. Succinctly, integration by parts tells us that for functions u and v, Int u * dv = uv - Int v * du.
What if we add the requirement that both functions must be continuous along the entire domain of integration?
[ "is their a modernized version of Euclid’s Elements?" ]
[ "math" ]
[ "6wvcra" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
My math class is reading it and going over it in class. I'm having a bit of trouble reading the text in the propositions. Any links and/or recommendations are welcome. Thanks.
Part of the problem is that some of the proofs in the Elements are wrong/have hidden assumptions, in the sense that the conclusions do not follow from the postulates. An example of this is discussed here.
If you don't mind a Java-based version, here's a nice interactive version: https://mathcs.clarku.edu/~djoyce/java/elements/elements.html
Wasn't the assumption at the time that the reader would actually draw the construction on a piece of paper? I get that it doesn't meet modern standards of rigor, but it also wasn't really trying to. "You are doing this on a piece of paper" doesn't even have the form of an axiom... but it does mean that that point is going to be there. It wouldn't have been thought of as something that needs proof because the reader can just look and see the point of intersection. Calling it "wrong" seems strong to me. It's not an example of a modern proof in the first place.
I think I saw an expensive Kickstarter version as a thread on here some time back - it's pretty nice but I can't find the link E: Found it lol https://www.kickstarter.com/projects/1174653512/euclids-elements-completing-oliver-byrnes-work
Not quite what you want but this is a YouTube channel going through the Elements.
[ "Great videos for a third grader who loves math?" ]
[ "math" ]
[ "6wv7s7" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
Hey guys, I'm having a bit of a tough time and I'm hoping someone who is an educator or just loves teaching children about how cool math is can help out. Since I just got my degree in mathematics recently I've been the 'math guy' at my job (which has nothing to do with math) and we just brought on a woman whose child is excelling in his math courses. The way she describes it, he's finishing his homework problems and asking for more when he's not watching Youtube videos or playing Minecraft. I'm leaving the job soon and I told her I would send him some cool videos before I go to keep that love of math going. She seemed really excited to hear that! After saying I would do that I realized my problem. I haven't dealt with someone that early on in their math career, ever. Normally people that I speak to usually have an idea of variables and things along those lines. Besides sending her some Khan Academy videos that have been directly related to problems she has had with his homework I haven't sent her anything yet. I don't really know what to send him on the 'cool' side of mathematics that would be accessible for someone just learning how to multiply a three digit number by a one digit number. I don't want to scare him off with a video on set theory or something lol. Maybe there's some interesting things that can be done in Minecraft? I don't know. Any help is appreciated! Thank you!
Maybe look through the videos by Numberphile and ViHart. Not everything on those channels is appropriate for elementary schoolers, but some of them would be.
PBS kids show Cyberchase. Enjoy! Ps. Gilbert Gottfried voices the bird!
Donald Duck in Mathemagic land. Pretty old but sparked my interest in math when I was young.
This was my favorite show as a child! I still enjoy watching it (though it does feel quite childish).
Thanks! I love the stuff from Numberphile but I will definitely have to sift through it a bit. Stuff like the rock paper scissors video though I'm definitely going to add. And ViHart has some great ones as well!
[ "Are there any applications of combinatorial Game theory?" ]
[ "math" ]
[ "6wv1rt" ]
[ 16 ]
[ "" ]
[ true ]
[ false ]
[ 0.79 ]
I am interested in Combinatorial Game Theory as it is offered at my university, but I have looked through the web and haven't really found applications besides computer science and games (like board games and such). Are there other applications?
Construction of the surreal numbers.
Ehrenfeucht–Fraïssé games and pebble games are used in mathematical logic to show that certain properties of mathematical structures cannot be expressed as formal sentences. For example, you can use them to prove that no sentence in first-order logic can distinguish connected graphs vs non-connected graphs.
I still don't get this.
Lexicodes are a class of error correcting codes that were derived from combinatorial game theory. They were developed by Conway and Sloane, among other game theorists.
Basically the surreal numbers are the part of the objects manipulable under combinatorial game theory that behave like the real numbers (for the most part). This is a gross simplification (it's possible to extend addition and multiplication to the full collection of games, but - for example - you can't guarantee that exactly one of a < b, a > b, or a = b are true), and really it's the kind of thing that takes at least a little bit of research to get used to.
[ "Don't Fall for Babylonian Trigonometry Hype" ]
[ "math" ]
[ "6ww2n9" ]
[ 393 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
null
But what about 1/4? That’s 0.25, which terminates, and yet Mansfield doesn’t consider it an exact fraction. And what about 1/10 or 2/5? Those can be written 0.1 and 0.4, which seem pretty exact. How does she figure this? As far as I can tell Mansfield never anywhere claimed or implied that such decimal fractions aren’t “exact”. I can’t imagine he would claim this, so if he said something that could be interpreted that way it was probably a mistake. I’m sure that Mansfield and Wildberger would agree that – in base 10 – any rational number which can be written as 2^a 5^b, for a, b ∈ ℤ, can itself be written as an “exact” (terminating) decimal fraction, and can have its reciprocal likewise written as an exact decimal fraction. This includes 1/4, 1/10, and 2/5. Indefensibly, when he lauds the many “exact fractions” available in base 60, he doesn’t apply the same standards. In base 60, 1/8 would be written 7/60+30/3600, which is the same idea as writing 0.25, or 2/10+5/100, for 1/4 in base 10. Why is 1/8 exact in base 60 but 1/4 not exact in base 10? It’s hard to believe this is an honest mistake coming from a mathematician and instead makes me even more suspicious that his work is motivated by an agenda. I think these whole last 2–3 paragraphs are based on Lamb’s misunderstanding of what Mansfield/Wildberger are saying, which makes it a bit unfortunate to go on about this in particular being “indefensible”. The point Mansfield & Wildberger are making about base 10 vs 60 is that base 60 works (quite a bit) better as a floating point division base if your main method for doing division is via lookup in a reciprocal table followed by lookup in a multiplication table, and if you want to only deal with base-regular fractions. You can certainly do the same (to any desired precision) in base 10, but you’re going to need longer decimal fractions vs. sexagesimal fractions. I’m hoping I can make some diagrams which visually represent the difference in efficiency in the near future.
Arguably there’s no absolute reason to prefer terminating positional fractions vs. repeating positional fractions vs. fractions written as ratios vs. as a list representing a continued fraction vs. some other representation, but that’s a totally separate discussion. I’m not sure I personally consider anything about this whole affair “indefensible”, though the press hype has been a bit ridiculous compared to the usual response to papers/books analyzing Babylonian mathematical tablets, and a lot of the popular press stories have exaggerated or misstated the paper’s context or content. It would certainly be nice if journalists were appropriately skeptical about new interpretations of ancient artifacts people have been arguing about for 100 years, and at least tried to read the work they were reporting on, tried to talk to other experts in the field, and reported a bit better about the context of the work. But I’m not holding my breath.
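The "regular denominator" point above can be checked mechanically: 1/n terminates ("is exact") in base b exactly when every prime factor of n divides b. A minimal sketch, stdlib only (the helper name is mine):

```python
from math import gcd

def terminates(n, base):
    """True iff 1/n has a terminating ("exact") expansion in the given base,
    i.e. every prime factor of n also divides the base."""
    g = gcd(n, base)
    while g > 1:
        while n % g == 0:
            n //= g
        g = gcd(n, base)
    return n == 1

# 1/8 is "exact" in base 60 AND in base 10 (0.125); 1/3 only in base 60:
print(terminates(8, 60), terminates(8, 10), terminates(3, 60), terminates(3, 10))
# Base 60 simply has far more regular denominators among 2..60 than base 10:
print(sum(terminates(n, 10) for n in range(2, 61)),
      sum(terminates(n, 60) for n in range(2, 61)))
```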
I like this paper demonstrating that a Sumerian document calculates (4/3) for the purposes of compound interest. The English in the paper isn't the best though.
That reminds me of the medical researcher who "invented" the trapezoidal rule for estimating the area under a curve. It was discussed in this subreddit , stackexchange , and doubtless many other places a few years ago.
Wildberger (co-author of the paper) is a known crank. I don't understand why journalists who report on this don't take the five minutes it takes to research his name.
One of the problems in all this is that people who can read cuneiform don't typically know much about trigonometry. I once collaborated with a friend (now deceased) who was an Egyptologist. He was trying to decipher a math papyrus, with examples eerily similar to those found in middle school textbooks. (It dealt with stuff like bolts of cloth and- yes - pyramids.) Anyway, my friend knew absolutely nothing about the math.
[ "Wtf" ]
[ "math" ]
[ "6wua1h" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.14 ]
null
Because of the reason already given to you. Go to /r/learnmath . This is not a homework forum.
It was asking for a simple calculation. Please see the sidebar, copied below. Homework problems, practice problems, and similar questions should be directed to /r/learnmath , /r/homeworkhelp or /r/cheatatmathhomework . Do not ask or answer this type of question in /r/math . If you're asking for help learning/understanding something mathematical, post in the Simple Questions thread or /r/learnmath . This includes reference requests - also see our lists of recommended books and free online resources. /r/askmath /r/learnmath
Did you read the sidebar?
It's either could have or could've, but never could of. See Grammar Errors for more information.
[ "I made a complex function grapher - please tell me what you think!" ]
[ "math" ]
[ "6wubc1" ]
[ 39 ]
[ "" ]
[ true ]
[ false ]
[ 0.87 ]
null
Just use the Taylor expansions :)
I guess it doesn't do sines, cosines, exponentials, or logarithms?
Not yet, but I am planning on adding support for those functions. I wanted to get some feedback on the basic functionality before going any further :)
Could somebody please explain this to me? I'm an engineering student, my only math courses have been calculus through multivariable, and an ODE course. I know what complex numbers are, can do arithmetic with them, yadda yadda yadda. What I'm confused about is what the variable z means in this context. I assume it's a complex number, ofc, but what does it mean that if I just plug in z into the program, it gives circles? Some relation to Euler's identity and sin/cos?
Each complex number z corresponds to a point on the plane. If you do something to each point (i.e. if you compute f(z) for each z) then you get another set of values. These may represent a 3d surface (e.g. a wave) and if you don't want to plot a 3d graph you can represent the vertical displacements with shades and colours, like a contour map.
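One way to see this concretely: here's a crude, text-only version of such a plot, using only the standard library (the shading scheme is just an illustration, not how the linked grapher works):

```python
def ascii_plot(f, size=21, span=2.0):
    """Crude text-mode "contour map": darker characters mean larger |f(z)|."""
    shades = " .:-=+*#%@"
    rows = []
    for i in range(size):
        y = span - 2 * span * i / (size - 1)
        row = ""
        for j in range(size):
            x = -span + 2 * span * j / (size - 1)
            # clamp the magnitude of f(z) into the shade palette
            level = min(int(abs(f(complex(x, y)))), len(shades) - 1)
            row += shades[level]
        rows.append(row)
    return rows

for line in ascii_plot(lambda z: z * z):
    print(line)  # |z^2| is 0 at the centre and grows toward the corners
```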
[ "Rationalizing radical fractions" ]
[ "math" ]
[ "6wtrsq" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.4 ]
Hey reddit math people, maybe somebody here knows the answer to this. In a review of some old stuff (reviewing for calculus) I ran into this question: Rationalize the expression and simplify: (root(4+h) - 2)/h The answer here was given as: 1/(root(4+h)+2) Anybody know why the bottom expression is considered rational and simplified and the top one is not?
Go to /r/learnmath . Also, probably a typo, I've never seen a source prefer radicals in the denominator. That said, it doesn't really matter so long as you understand how to manipulate fractions like that, and how to go from top to bottom or bottom to top whenever needed.
In Calculus you'll evaluate the expression as h goes to 0. In the first expression you get "0/0" as h->0, which can't be determined. In the second expression, you can see the expression goes to 1/4 as h->0.
It’s not the “best final version”, it’s just a particular way of normalizing the expression, which can sometimes make things easier or sometimes harder, depending on what you want to do with it afterward. As far as I can tell the main reason to always normalize such expressions in a class setting is so that teachers have an easier time figuring out whether you computed the right answer or not.
You'd want to go the other way, so that the numerator is irrational and the denominator is rational. The key observation is that if you have x +a, you can multiply it by x -a and remove the square root. Do this to numerator and denominator and you can get square roots in only one of them. It's just a manipulation trick though, not anything deep. It only works for square roots.
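To make the conjugate trick concrete, here's a quick numerical check (a sketch, stdlib only) that the two forms of the expression agree wherever both are defined, and that only the rationalized form behaves nicely at h = 0:

```python
import math

def original(h):
    # (sqrt(4 + h) - 2) / h -- undefined at h = 0
    return (math.sqrt(4 + h) - 2) / h

def rationalized(h):
    # multiply top and bottom by the conjugate sqrt(4 + h) + 2
    return 1 / (math.sqrt(4 + h) + 2)

# The two forms agree wherever both are defined...
for h in (1.0, 0.5, 1e-3):
    assert abs(original(h) - rationalized(h)) < 1e-9

# ...but only the rationalized form can be evaluated at h = 0, exposing the limit:
print(rationalized(0))  # 0.25
```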
"Rationalizing" an expression, at least in high school math textbooks, does typically mean to rewrite the expression with the irrational part in the numerator.
[ "Anyone know any rigorous books that revisit elementary arithmetic/algebra?" ]
[ "math" ]
[ "6wtk6u" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 0.79 ]
[deleted]
The first few chapters of Tao's Analysis I.
grundlagen der analysis by Landau
Mathematics for Elementary Teachers by Beckmann
Are there any prerequisites? Or is it like Number Theory - not many prerequisites but insanely hard?
[ "I need help understanding the series I created." ]
[ "math" ]
[ "6wrfmo" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
[deleted]
Your sequence (not series) only includes numbers with a finite decimal expansion. This misses a lot of rationals, not to mention every irrational.
0.333... will never appear in your sequence. Nor will sqrt(2). Your sequence contains numbers with as many digits as you want, but none that actually has infinite digits.
It's all good. As long as you don't title your post "why Cantor was wrong" people will usually help you.
It's doing all the numbers n*10 then n*10 then 0.1+n*10 ... can't that hit every number at aleph 0?
Yeah, true. Sorry for wasting your time.
[ "math for research in machine learning" ]
[ "math" ]
[ "6wqo9w" ]
[ 13 ]
[ "" ]
[ true ]
[ false ]
[ 0.85 ]
[deleted]
ML is mostly applied statistics, so that's probably the most important. Algorithms and Complexity is also useful, as it informs what is and is not learnable in various contexts. Numerical analysis/numerical linear algebra is also important, as the implementations of most ML algorithms boil down to doing computations on large (often sparse) matrices.
Advanced classes in Optimization, Statistics and Numerical Analysis
Manifolds (which I'm assuming is some kind of differential geometry course) and functional analysis aren't totally useless in machine learning, but their relevance largely comes from the areas where they intersect topics like probability, statistics, algorithms, complexity, and numerical analysis. If you've already covered all of that, you may get something out of other courses, but someone who wants to do machine learning shouldn't take differential geometry over mathematical statistics, for example.
Advanced classes in Optimization, Statistics and Numerical Analysis Doesn't functional analysis also play a role in the theory of ML ?
so taking manifolds and functional analysis (something I'm quite interested in) won't be very useful?
[ "Why more physics can help achieving better mathematics" ]
[ "math" ]
[ "6wqc9o" ]
[ 45 ]
[ "PDF" ]
[ true ]
[ false ]
[ 0.85 ]
null
When linking to arxiv, please link to the summary page and not to the pdf. It would be greatly appreciated.
Abstract for the PDF avoiders: In this paper, we discuss the question whether a physical “simplification” of a model makes it always easier to study, at least from a mathematical and numerical point of view. To this end, we give different examples showing that these simplifications often lead to worse mathematical properties of the solution to the model. This may affect the existence and uniqueness of solutions as well as their numerical approximability and other qualitative properties. In the first part, we consider examples where the addition of a higher-order term or stochastic noise leads to better mathematical results, whereas in the second part, we focus on examples showing that also nonlocal models can often be seen as physically more exact models as they have a close connection to higher-order models.
https://arxiv.org/abs/1708.07735
I rather like this paper, we are always taught to use the simplest model possible but I hadn't considered that using higher order terms lead to more rigorous and perhaps broader solutions. Thanks for this, quite an interesting read.
Thank you for saying it. I really wish people would do this, especially since arxiv has such a useful landing page for each article.
[ "What is a proof?" ]
[ "math" ]
[ "6wq01a" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.64 ]
null
A proof is a proof. What kind of a proof? It's a proof. A proof is a proof. And when you have a good proof, it's because it's proven. Former Canadian prime minister Jean Chretien
And that's the point of the article, so much for reading the link before commenting on it.
You beat me to it. Obligatory video: https://www.youtube.com/watch?v=TLmUJCCKBTk
I'm pretty sure Youtube truncates when a video doesn't end perfectly on a second rather than adding a few milliseconds of silence. Really annoying because all of these short quote videos cut off the last couple of words.
It's okay, I did make the joke because it sounds like he's talking about proofs. I should've given context, it's one of the most famous blunders by a Canadian politician in recent history so I'm used to people around me knowing it (I'm Canadian).
[ "twin numbers and the relation with number 6" ]
[ "math" ]
[ "6wpytd" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.58 ]
I have found a new characteristic of twin primes: the difference between the sums of consecutive twin prime pairs is always divisible by 6, not only the number between the twin primes. This applies to all twin primes except the pairs 3-5 and 5-7. Example: 5 + 7 = 12 and 11 + 13 = 24, and 24 - 12 = 12. 41 + 43 = 84 and 59 + 61 = 120, and 120 - 84 = 36. 18406979 + 18406981 = 36813960, and the next twin pair gives 18407687 + 18407689 = 36815376; 36815376 - 36813960 = 1416, which is divisible by 6. The prime numbers aren't random. What do you think??
This kind of stuff is so exciting until you take an intro number theory class and learn modular math lol
If p, p+2 are prime and p > 3, then p is odd and congruent to 2 mod 3 (otherwise p + 2 would be divisible by 3). So p is congruent to 5 mod 6, and p + (p + 2) is always congruent to 5 + 5 + 2 = 12 ≡ 0 mod 6.
Modular math is awesome though. They reveal extremely deep relationships between numbers.
Take some number p; it can be written as the sum of a multiple of 6 and another number, so p = 6k + a, where 0 ≤ a < 6. If a = 0, then p is a multiple of 6, so it is not prime; if a = 2 or a = 4, then p is even, so it is not prime; if a = 3, then p is a multiple of 3, so it is not prime. So a prime greater than 3 is either 6k+1 or 6k+5. Thus, twin primes are always of the form 6k+5, 6(k+1)+1, which implies that their sum is always a multiple of 6 (except with 3-5, of course).
All primes other than 3 and 2 are of form 6n ± 1. Thus, if you happen to find twin primes, one is 6n+1 and one is 6n-1. 6n+1 + 6n-1 = 12n. Six divides 12n.
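A quick empirical check of the observation above (in fact the pair sums are divisible by 12, hence by 6; a sketch with a naive primality test):

```python
def is_prime(n):
    """Naive trial division -- fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Every twin pair (p, p+2) with p > 3 has p = 6k + 5, so the pair sum is
# (6k + 5) + (6k + 7) = 12k + 12: divisible by 12, and therefore by 6.
twins = [(p, p + 2) for p in range(5, 2000) if is_prime(p) and is_prime(p + 2)]
assert all((p + q) % 12 == 0 for p, q in twins)
print(len(twins), "pairs checked")
```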
[ "The ultimate sequence / function" ]
[ "math" ]
[ "6woc9q" ]
[ 0 ]
[ "Removed - ask in Simple Questions thread" ]
[ true ]
[ false ]
[ 0.41 ]
null
Well, it looks neat, but I gotta say I don't understand what it is you're trying to do with this. What does this function express? What makes it an "ultimate" function?
Can you figure out how to make it use e instead of cos? Do you know what the difference between a function and a sequence is? "Sequence" has a defined meaning, and I don't think it applies here.
It could be this: https://en.m.wikipedia.org/wiki/Euler%27s_formula
Non-Mobile link: https://en.wikipedia.org/wiki/Euler%27s_formula /r/HelperBot_ /u/swim1929
It's a function that can combine other functions together; you can combine many of them if you alter the function. Many more sequences can be generated through this method of combining sequences. The general term also doesn't include any imaginary numbers, and I think this makes building functions a lot easier. You can use this to derive even better restrictions, further customizing it into things like: when 10 > x > 1 use this general term instead of that general term, and in any other place use another one. It's also very easy to differentiate and integrate, considering that you can just use the chain rule. I think it makes life easier.
[ "3,700-year-old Babylonian tablet shows Greeks did not invent trigonometry - \"not only contains world’s oldest trigonometric table; it is also the only completely accurate trigonometric table\"" ]
[ "math" ]
[ "6wogm2" ]
[ 0 ]
[ "Removed - repost" ]
[ true ]
[ false ]
[ 0.3 ]
null
Oh for fucks sake. Can we please stop posting this bullshit?
Discussion post: https://www.reddit.com/r/math/comments/6vu0ty/new_research_shows_the_babylonians_not_the_greeks/
You go ahead and feel free to not post links you find interesting, which have never been posted to this subreddit before. THAT's the way to improve the Internet! You've nailed it; congratulations.
There have been posts about this twice since friday, just the telegraph's version wasn't posted. It is also not the "only completely accurate trigonometric table," that's really misleading.
twice since friday hooooly shit that's more coverage than cnn gave muh russia
[ "Clarity regarding wind speed, geometric growth,. exponential growth" ]
[ "math" ]
[ "6wjzk9" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
[deleted]
The way I understand it: Geometric growth corresponds to the growth of a geometric sequence or series. As you may see, this looks like exponential growth, but it is typically used in discrete settings (though it's not unheard of to use it synonymously with exponential growth). Quadratic or polynomial growth in general should be somewhat clear: f(n) = an^k (at least asymptotically), with k = 2 in the quadratic case. Exponential growth also should be relatively straight-forward: f(n) = Ca^n (at least asymptotically). As the wind gets stronger, the force experienced increases by a factor of 4. Is this geometric growth? You can't determine the sort of function (read: growth) from just 1 data point, even if you are only choosing between basic quadratic and exponential choices. It's the underlying functions - the ones I've given above - which matter here.
Is your Google broken?
To really get at your question, though, consider how the wind applies some pressure (force/area) proportional to its speed across the sail.
You need r/physics or someplace that specializes in aerodynamics.
The force is an extremely complicated phenomenon arising from the different possible behaviors of air as it flows around an object. However, for relatively high speeds, the force is approximately proportional to the square of the speed, i.e., it increases quadratically. An exponential (or geometric, which is the same thing) growth would imply that the force goes as a^v for some number a, which isn't the case.
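A tiny sketch of the quadratic-vs-exponential distinction, assuming an idealized quadratic drag model (the constant k is a made-up placeholder):

```python
def drag_force(v, k=1.0):
    """Idealized quadratic model: force proportional to the square of speed."""
    return k * v * v

# Quadratic: doubling the speed always quadruples the force.
assert drag_force(20) / drag_force(10) == 4
assert drag_force(40) / drag_force(20) == 4

# Exponential growth a**v would instead multiply the force by the same fixed
# factor for every +10 of speed; the quadratic model's +10 ratios shrink:
print(drag_force(20) / drag_force(10), drag_force(30) / drag_force(20))  # 4.0 2.25
```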
[ "What Are You Working On?" ]
[ "math" ]
[ "6wkd5c" ]
[ 13 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
This recurring thread will be for general discussion on whatever math-related topics you have been or will be working on over the week/weekend. This can be anything from math-related arts and crafts, what you've been learning in class, books/papers you're reading, to preparing for a conference. All types and levels of mathematics are welcomed!
LPT: killing the kids outside your classroom is just as illegal.
Getting ready to pass my last two required courses for the bachelors tomorrow. It'll all be over in 15 hours, so I'm a bit nervous.
Not killing the kids in my classroom.
Trying to solve some exercises on Algebraic Topology. I'm struggling a bit (an understatement, I'm afraid). In one exercise, you need to prove a property of a direct sum of groups in which one of the terms is a quotient between the kernel of a group homomorphism and the image of another homomorphism, but those homomorphisms are actually between quotient groups, and those groups in the quotients are actually groups of formal sums of certain continuous functions to a topological space. I never worked with structures that intricate so it's not easy.
Writing up proofs for a couple combinatorial identities. ( These ones but generalized to noncommuting x'es.)
[ "Equation for simple math problem" ]
[ "math" ]
[ "6wjv2s" ]
[ 0 ]
[ "Removed - ask in Simple Questions thread" ]
[ true ]
[ false ]
[ 0.25 ]
null
thanks for the instruction. I was on a mobile and didn't see the sidebar. I have posted to ask /r/askmath
Read the sidebar.
Don't understand your reply. What sidebar?
Maybe you're on mobile or similar and cannot see the guidelines. Among those: If you are asking for a calculation to be made, please post to /r/askmath or /r/learnmath .
The sidebar of this sub.
[ "Algebra read left to right" ]
[ "math" ]
[ "6wke86" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
h(g(f(x))): x then f then g then h. (b − a): from a to b; in vector spaces, the vector going from a to b. (b / a): again from a to b; in the complex plane, the scaling and rotation mapping a to b. Essentially that is how I've come to read my equations, upon deciding it insane to keep on the old way, where everything about formulas is in reverse of its actual meaning. Noticed a professor doing it. DAE?
The notation "f(x)" seems to come from the way we say (in European languages, at least) "the f of x" rather than "x's f" or "of x its f". This is a quirk of natural language, not of mathematical notation. People have tried turning the order of composition around (and writing xf or x instead of f(x)); it only ended up confusing everyone. xkcd: Standards .
Image: xkcd "Standards". Transcript: "Fortunately, the charging one has been solved now that we've all standardized on mini-USB. Or is it micro-USB? Shit."
Herstein?
The most annoying thing is that commutative diagrams get written in the opposite way from the algebraic notation.
And thus, not wanting to meddle with how it is written, i thought of limiting myself to the way it is read. And reading right to left here and there would merely be a habit to form. I think it helps and that's that.
[ "Do you visualize maths in colour?" ]
[ "math" ]
[ "6whgmc" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.6 ]
When you guys visualize graphs, shapes and such in math, are they in colour? For me personally it's completely colourless, like not even grayscale; it's as though the colour "information" just isn't there. Would be interested in the different answers I get.
No.
Well when doing analysis I usually don't see green, but algebra has always had a distinctly not-red to it for me
Would you care to expand on that? How exactly do you not see math in color? What sort of colors do you not see?
I do see abstract things in color. For shapes, it's really any color as long as it's filled. For numbers though, there's a specific color for each going as follows: 1- black/dark blue, 2- beige/yellowish, 3- bright red, 4- blue, 5- bright orange (think Home Depot), 6- beige/gray, 7- brown, 8- yellow green, 9- black, 10- cyan/white, etc... often repeating colors. The same thing with months and days of the week. I'm not exactly sure if I have what is called 'synesthesia' since I don't literally see their colors in real life, but only when I think of them-- with the color sort of like a "background" for my "thought canvas". I don't really know a better way to put it. And for the record, it doesn't give me any exceptional mathematical abilities. In fact, I'm really sluggish when it comes to mental math.
As someone who is interested in graph theory, I visualise some graphs with colored vertices. This is because χ, the chromatic number of a graph, is very important. I have certain special cases, like the 3-coloring of the Petersen graph, memorised. So I always picture it in that color configuration. Other graphs, no.
[ "An interesting question about additive subgroups of R^(n)..." ]
[ "math" ]
[ "6wh7sg" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.58 ]
Show that if any additive subgroup of R contains any non-degenerate interval (meaning the end points aren't the same), the subgroup is all of R. Follow-up: State and prove, in as much generality as possible, a similar result for R^n; that is, a result of the form: "If an additive subgroup of R^n contains X, then it is all of R^n." The second part is a little non-rigorously worded, but hopefully the meaning is understood. I found it interesting to try to find out what the most general possible condition was. Edit1: I wonder if you could do a more general thing for R^n involving representatives of R/Q..
There is also a generalization of this to structures other than R^n. Namely, the following is true: any connected topological group is generated by any open neighborhood of the identity.
You can actually get that a connected topological group is generated by any open set just by translating.
Take a Q-basis B for R^n. Let B' = {k/n : k ∈ B, n ∈ Z\{0}}. Call such a set a Z-basis for R^n. If an additive subgroup G of R^n contains a Z-basis, then G = R^n.
Is this an iff condition?
I think the converse statement should actually be: If G generates R^n then G contains a Z-basis. G need not be a subgroup. I don't think this is entirely obvious?
[ "I never understood fractions or decimals as powers until today, it was very simple but no one explained them to me" ]
[ "math" ]
[ "6wima9" ]
[ 402 ]
[ "" ]
[ true ]
[ false ]
[ 0.88 ]
so like lets say 5^3 is easy, it's 5×5×5, but I always wondered how would I expand 5^3.5. it turned out I do this: 5 × 5 × 5 × 5^0.5, which is 5 × 5 × 5 × √5. so it's simple: 5 × 5 × 5 × √5. this may be very simple and stupid to many but this is a life change to me, I hope that if someone had the same problem like me someday will find this post and actually benefit from it edit: added spaces between the asterisks since they made things italic instead of multiplication edit2: I put × instead of the asterisk because of formatting problems
If you ever plan to move to more advanced maths I suggest dropping the "power as repeated multiplication" idea as soon as possible. Don't get me wrong: it's really useful when starting off, but it quickly becomes very limiting, just like with multiplication, which we start off thinking of as repeated addition. Yes, it might work with integer numbers, and by tweaking a few things with real numbers, but what about complex numbers or algebraic expressions? If you want to look at a new way to approach powers I highly suggest this video; the whole channel is really great for people who want an intuition for maths. Edit: to avoid having to repeat this to every commenter: I didn't express myself well enough. What I wanted to say is that repeated multiplication is only one intuition which, while useful, is very limiting in many situations. Analysis (where that intuition is really useless) is by far my favourite field of maths, which is why I said it in such an extreme way.
3Blue1Brown is a gift to humanity! I especially love his linear algebra playlist.
All right. And from here how does one proceed in actually calculating √5? What about 5 Let alone 5
The pedagogical point is that once you have a notion of "nth roots" and "exponentiation is repeated multiplication" there is only one sensible way to define exponentiation of rational numbers. Once you have "exponentiation of rational numbers" and "exponentiation is continuous" there is only one sensible way to define exponentiation of real numbers. Once you break a complex number into a radius and an angle and state that multiplication of two complex numbers multiplies the radii and adds the angles, the same is true for exponentiation of complex numbers. Each level of complexity takes the information from the previous level and extends it in some way, but we still start with exponentiation as repeated multiplication. If we define exponentiation from the beginning as a power series or something, the intuition of basic exponentiation is completely gone.
3.5 = 7/2, so 5^3.5 = 5^(7/2) = √(5^7)
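A quick check of both identities in Python:

```python
import math

# 5**3.5 is three whole factors of 5 plus one "half factor", i.e. sqrt(5):
lhs = 5 ** 3.5
assert math.isclose(lhs, 5 * 5 * 5 * math.sqrt(5))

# Equivalently, 3.5 = 7/2, so 5**3.5 is the square root of 5**7:
assert math.isclose(lhs, math.sqrt(5 ** 7))
print(lhs)  # ≈ 279.508
```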
[ "3d Vector to orientation" ]
[ "math" ]
[ "6wfxb2" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
So I was thinking the other day about rotation, and I thought of a way to convert from a vector to an orientation: rotate around the axis of the vector by its magnitude. For example, say you have the vector [0,0,1]: you get a rotation of 1 radio around the z axis, because the vector goes along the z axis and has a magnitude of 1. I thought this might have a name and a nice way to be calculated (vector to matrix and back), but I couldn't find one.
what you're doing is something like "take the cross product with the vector [0,0,1]". It's true that picking a cross-product on R^3 is basically the same as orienting it for the reason you've given. But remember that we could orient it the other way, e.g. use the left-hand rule instead of the right-hand rule. To put it in your terms: how do you know whether to choose the clockwise or counterclockwise rotation around the axis? That's the choice that orients the vector space.
I actually just wrote a little article today for my website on using quaternions to do rotations in R^3: https://adamsturge.github.io/Engine-Blog/mydoc_rotations.html
https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula
By 1 radio do you mean 1 radian? Anyway, if you represent a rotation by a quaternion and then take the logarithm, you get basically this. But note that you probably want to use [0, 0, 1/2] as your vector to represent a 1 radian rotation, because the way quaternions are used for rotation is via a kind of sandwich product, basically two half-rotations whose other side effects cancel, so you need to use half the final angle you want. To get to a rotation matrix, first exponentiate to get a quaternion. In the case of [0, 0, 1/2], that would be the quaternion [cos 1/2, 0, 0, sin 1/2]. To get from there to a matrix representation of the rotation, see https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation#Conversion_to_and_from_the_matrix_representation
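The pipeline described above (rotation vector → half-angle quaternion → rotation matrix) can be sketched in a few lines of dependency-free Python. This is only an illustration; the function names are mine, not from any particular library:

```python
import math

def axis_angle_to_quaternion(v):
    """Rotation vector (axis * angle, in radians) -> unit quaternion
    [w, x, y, z], using the half-angle convention described above."""
    angle = math.sqrt(sum(c * c for c in v))
    if angle == 0.0:
        return [1.0, 0.0, 0.0, 0.0]  # identity rotation
    s = math.sin(angle / 2.0) / angle
    return [math.cos(angle / 2.0)] + [s * c for c in v]

def quaternion_to_matrix(q):
    """Unit quaternion [w, x, y, z] -> 3x3 rotation matrix (row-major),
    via the standard conversion formula."""
    w, x, y, z = q
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ]

# The example from the thread: the vector [0, 0, 1] encodes a
# 1-radian rotation about the z axis.
q = axis_angle_to_quaternion([0.0, 0.0, 1.0])
R = quaternion_to_matrix(q)
```

Note that the half-angle sine/cosine in `axis_angle_to_quaternion` is exactly the [0, 0, 1/2] point made above: the quaternion for a 1-radian z rotation is [cos 1/2, 0, 0, sin 1/2].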
Here's an example of how this is widely used. OpenGL is an industry standard for graphics and 3D modeling software. You typically have a coordinate system for the object of interest and then matrices to define the transformations needed to get to the screen point of view. http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
[ "TI-82 vs TI-84 for Algebra II" ]
[ "math" ]
[ "6wfl2q" ]
[ 0 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.4 ]
null
I have a TI-82, I don't know what this guy's talking about. It graphs, because it's a graphing calculator, though he might be talking about the TI-82 Stat, which I think doesn't graph. But you'd know if you were getting a TI-82 Stat. I took Algebra II last year, and my TI-82 worked fine. The only thing my calculator was missing was rref(), but I just copied this program from online into my calculator, and it worked perfectly. If you need trig functions as well, know that inputting degrees/minutes/seconds works like this on later TI calculators: 128°12'34" But on the TI-82, it's entered like this: 128'12'34' Otherwise, I've never run into any problems. If you can afford it, get an 83 or 84 -- it's easier on you, because you don't have to figure out the small differences yourself. If you're on a budget, a TI-82 will work, but you'll have to be a little more creative.
Most teachers don't allow phones during testing tho.
You can't graph with the TI-82. Ti-84, TI-89 or TI-nspire-cx are recommended if not required for Algebra II.
Is the graphing calculator mandatory? If yes, then I recommend getting a TI-84. When I was in Algebra 2 we rarely used graphing calculators. Does your teacher have a class set? If yes, then you don't need it, and at home for homework you can just use https://www.desmos.com/calculator I recommend a TI-84 if you plan to go beyond Algebra 2; you'll for sure need it for trigonometry or AP statistics and calculus. Also, the TI-84 I believe is more capable. Also check out eBay, you might be able to get some used ones for a good price.
[ "Numbers where changing the base permutes the digits" ]
[ "math" ]
[ "6wfy77" ]
[ 85 ]
[ "" ]
[ true ]
[ false ]
[ 0.89 ]
This shows that 238 changes to 283 when changing the base from base 11 to base 10. Are there any other numbers like this? Edit: I thought of a way to generalize this. You could have more than one permutation and more than one base change which would create a huge path connecting different numbers together.
For b>2, 21 base b is 12 base 2b-1 (both equal 2b+1)
Yes, there are 272 such numbers less than a million and 4808 less than 100 million, starting with 1 to 9, then 196, 283, 370, 1723, 4063, 7587, 8665, etc. There can only be finitely many such numbers in total though, because all sufficiently large numbers have more digits in base 10 than base 11. EDIT: corrected 238 to 283, so that all numbers listed are in base 10.
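A brute-force search reproduces the start of this list (a short sketch; `digits` and `base_permuted` are just illustrative names, and identity permutations such as 1 to 9 are included, per the comment above):

```python
def digits(n, base):
    """Digits of n in the given base, least significant first."""
    ds = []
    while n:
        ds.append(n % base)
        n //= base
    return ds or [0]

def base_permuted(limit, b1=10, b2=11):
    """Numbers below `limit` whose base-b2 digits are a permutation
    of their base-b1 digits (identity permutations included)."""
    return [n for n in range(1, limit)
            if sorted(digits(n, b1)) == sorted(digits(n, b2))]

matches = base_permuted(10**6)
```

For example, 283 is in the list because its base-11 representation is "238", a rearrangement of its base-10 digits.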
nice
are there finitely many still if you allow any combination of bases though?
You can just exclude the identity permutation and ask the same question.
[ "Question about the Rubik's Cube" ]
[ "math" ]
[ "6wfllw" ]
[ 35 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
null
Read this: http://mathworld.wolfram.com/PermutationCycle.html . Express the effect of a sequence of moves on a Rubik's cube as a permutation on the little squares. Take the cyclic representation of that permutation, and the lcm of the cycle lengths is the order of the permutation.
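The cycle-length recipe above is short to code up (a minimal pure-Python sketch, assuming you've labeled the positions 0..n-1 and written the move sequence as a list `perm` with i -> perm[i]):

```python
from functools import reduce
from math import gcd

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a list i -> perm[i]."""
    seen = set()
    lengths = []
    for start in range(len(perm)):
        if start not in seen:
            n, i = 0, start
            while i not in seen:
                seen.add(i)
                i = perm[i]
                n += 1
            lengths.append(n)
    return lengths

def order(perm):
    """Order of the permutation: lcm of its cycle lengths."""
    return reduce(lambda a, b: a * b // gcd(a, b), cycle_lengths(perm), 1)

# e.g. the permutation (0 1 2)(3 4) has cycles of length 3 and 2,
# so its order is lcm(3, 2) = 6.
```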
Proving that, eventually, repeating a sequence will return to its original state might be doable in the two weeks. What? I can do it in one line. There are finitely many Rubik's Cube states. If f is your sequence of moves, eventually f^n (starting position) will equal f^k (starting position) for distinct n and k. f is invertible, so take f^(-1) of both sides min{n,k} times. ∎
Since there's only a finite number of positions, eventually you have to get a repeat. The harder part (but not too hard) is to show from there that the first repeat is the initial stage.
His second sentence addresses exactly that issue. In words, the idea is that if the starting point weren't on the orbit, then there must be a point x later on that is the first point in the orbit. But this contradicts invertibility of f, since x must be reached by both a point in the orbit and a point outside the orbit.
You don't need very much group theory to solve this. Write down the effect of the sequence in cycle notation, then compute the order of the element. PM me if you have any questions.
[ "Advice on newbie tutor, who is about to tutor kids in middle/high school math." ]
[ "math" ]
[ "6wfbeu" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
[deleted]
Note, I'm not critiquing the original poster, but to those downvoting: this is actually more important than you think. When creating my advertisements for tutoring (on Facebook, fliers, etc.), I've had numerous parents give me the following advice. Most parents want a really good tutor for their kids: would you want someone who doesn't know proper grammar to tutor your child, or someone who does? If they don't care enough to learn or use proper grammar, why would they take care in ensuring my child understands the material incredibly well? While /u/Donnakebabmeat was a little bit condescending with his comment, it is a really important point for getting tutoring jobs.
I'm a college senior who has been tutoring for years! Some tips: Ask the student how they'd go about solving a problem, and talk it out together. If they REALLY don't know where to start, try referring to an example to show them a method, and then try that method together. Think out loud with the student. Say, "usually when I see these types of problems, my first thought is to isolate X (or whatever). What is a method we can use to start doing that?" Ask them to summarize a process after a few similar problems. You don't have to know the answers (or even how to do a particular problem). Rely on your arsenal of strategies--those can often be more important than the knowledge. If you have their books or materials, use those to give you ideas. The important part is being able to pull from the methods you know, and explaining why you're choosing a method.
I know you are 'Math' But you could start by using their, instead of there. Just saying.
Also, don't talk too fast. If you're showing a method you're familiar with, or you're excited to get to the end of a solution, remember to check whether they are still following. Ask lots of questions. Instead of saying 'now we need to divide by 5', say 'what do you think we should do with the 5 now? OK, and why? What would happen if we multiplied instead?' Etc.
I've tutored quite a bit. My advice is teach them how to problem solve, approach new problems, find information in their book, notes and online, study, and think about math in a way that will help them in future classes. Don't be a homework answer machine! Show them cool math stuff to get them excited. Have them follow their intuition through problems even if it's not the most elegant way and then after show them some other approaches.
[ "Why aren't mathematicians able to teach how to do mathematics?" ]
[ "math" ]
[ "6wctxj" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
null
I don't think this is true. There are a number of books aimed at teaching how to transition from plug-and-chug mathematics (up to, and including intro calculus and maybe intro linear algebra) to mathematical reasoning (proofs and such). Many university curricula include a course on mathematical reasoning. Some do it as a standalone course, others do proof-based linear algebra, and still others roll it into the first course in algebra or analysis. Math reasoning isn't easy to teach or to learn, but the assertion that mathematicians explicitly do not teach it is absolutely false.
I disagree. Someone who has worked through Rudin as well as a rigorous linear algebra book should do fine on that exam.
A professional mathematician is, by definition, someone who is good at math and who's been doing math for a long time. If you're good at something and you keep doing it, you will get better. If a high school senior took an Algebra 1 final, he'd pass it easily. A high school freshman might find it difficult. Why is that? Because the freshman is learning this for the first time, but the senior has used algebra for the past four years in their math classes and has gotten better at it.
There's books like "How to Solve It" by Polya which cover this.
[ "Online graduate courses for credit?" ]
[ "math" ]
[ "6wd6q1" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
[deleted]
Beginning math graduate courses in the US are often undergraduate courses in Europe. Talk to faculty in your department to learn more about the differences and get advice from them.
Agree with /u/chebushka about the discrepancy between US and Europe. To answer your question, whether it's justified or not, most people would turn their noses up at an online graduate credit, if you can even find a reputable one.
I'd love to know the logic for a single one of those downvotes beyond people taking it as a dig at the US.
I can't speak about all areas of math, but its faculty in number theory is well known to people who work in number theory. Considering your username, perhaps that is helpful.