[ "What exactly is so exceptional about exceptional groups that they got this title?" ]
[ "math" ]
[ "8q7tb0" ]
[ 20 ]
[ "Image Post" ]
[ true ]
[ false ]
[ 0.8 ]
null
The term "exceptional" here is being used as in the sense that they are "exceptions" that do not follow a general pattern, not that they are amazing.
I think it is just the fact that the families An, Bn, Cn and Dn are infinite, while the families En, Fn and Gn are not. For instance, there is no E9. One could say there is an E5, but actually it is D5 (if I am not mistaken), so there is no need to label it twice.
Is this.... Is this loss???
My question is exactly about the patterns that they violate. I couldn't find a definition of exceptional group. Sorry for the poorly worded and ambiguous question.
This is a classification of the simple Lie algebras (given by Dynkin diagrams). After n=8, the Lie algebra of type En is no longer simple so the small values of n which keep En simple are “exceptions”. There is a Lie algebra of type E9 but it isn’t simple so it isn’t in the list. Similar statements are true for types Fn and Gn.
[ "10 years after using it, I finally understand the intuition behind L'Hôpital's rule." ]
[ "math" ]
[ "8q9aq3" ]
[ 590 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
Succinctly, l'Hôpital's rule states for real, differentiable functions that at a point a, if f->0 and g->0, then (f/g)->lim_a f'/g', provided lim_a f'/g' exists. It had always seemed like black magic, because no one had ever explained to me what it meant graphically. The idea is actually pretty straightforward, once explained. Unfortunately, in my early calculus education, the notion of limits was never used rigorously. Lately, though, I was going through baby Rudin for pleasure and took a bit of time to develop the intuition to follow the proof.

So basically what's going on is that near the point a, well-behaved functions f and g start to look a lot like linear functions that pass through a. So let p(x-a) approximate one and q(x-a) approximate the other. Then near a, f/g simply looks like (p(x-a))/(q(x-a)), or p/q, which is exactly the ratio of their derivatives at a. Of course, a rigorous proof (with fewer assumptions about f and g) is more involved, but I'm no longer symbol-pushing and hey, presto, magic theorem on one of the more intuitive parts of mathematics.

There's a counterpart to the theorem that deals with the case where g->\infty, but intuitively one assumes the same thing at infinity, or considers the limit-to-zero case with the functions (1/f) and (1/g) instead.

edit: Thanks for spotting an error in my statement of L'Hôpital's!
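A quick numerical illustration of this intuition (the functions below are arbitrary examples of mine with f(0) = g(0) = 0, not anything from the post):

```python
import math

# f and g both vanish at a = 0, so f/g is a 0/0 form there.
f = lambda x: math.sin(3 * x)       # f'(0) = 3
g = lambda x: math.exp(2 * x) - 1   # g'(0) = 2

# As x -> 0, f(x)/g(x) should approach f'(0)/g'(0) = 3/2,
# because near 0 both functions look like their tangent lines.
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(x, f(x) / g(x))
# The printed ratios converge to 1.5.
```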
I had a very similar eureka moment about L'Hopital's rule myself. It also seemed something like black magic to me. I understood the proof, but I did not intuitively get it. One day I was thinking about the problem of how to convert mouse sensitivity in FPS games when you change your field of view. Like if you have sensitivity s and field of view theta, and you change the field of view to theta', what should your new sensitivity s' be so that it "feels" the same? It turns out the answer I was seeking doesn't exist, because the mapping from screen position to mouse position is non-linear, but one possible way to convert your sensitivity is to make "very small" movements feel the same. So if f(theta, x) maps screen positions at FOV theta to the angle phi from the center, and m=phi/s is the amount you need to move the mouse, we solve f(theta, dx)/s = f(theta', dx)/s', so s'=s*f(theta', dx)/f(theta, dx). Well for dx=0 this is obviously a 0/0 problem, but I also realized that f(theta, dx) ~ (df/dx)(theta, 0)*dx, and then the dx's would cancel and I could easily solve it. Then I realized that this was exactly L'Hopital's rule. I hadn't intentionally used it. And then I understood, truly understood, why L'Hopital's rule worked.
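A sketch of that computation in code. The mapping f below is an assumed rectilinear-projection model, not necessarily the one the commenter used; the point is only that the ratio for shrinking dx settles to the ratio of the derivatives at 0, which is L'Hopital's rule in action:

```python
import math

# Hypothetical screen-to-angle mapping for a rectilinear projection:
# screen position x in [-1/2, 1/2], horizontal FOV theta.
def f(theta, x):
    return math.atan(2 * x * math.tan(theta / 2))

theta, theta_new = math.radians(90), math.radians(60)

# f(theta_new, dx) / f(theta, dx) for shrinking dx approaches the
# ratio of the x-derivatives at 0, i.e. tan(theta_new/2)/tan(theta/2).
for dx in [0.1, 0.01, 0.001]:
    print(dx, f(theta_new, dx) / f(theta, dx))
print("derivative ratio:", math.tan(theta_new / 2) / math.tan(theta / 2))
```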
Your statement of L'Hospital's rule at the start is incorrect, since you omitted the existence of the limit of f'/g'. You refer to the lack of a rigorous treatment of limits when you first learned calculus, but it is unnecessary to use limits rigorously to see where L'Hospital's rule comes from: near x = a, the formula for tangent lines says f(x) is approximately f(a)+f'(a)(x-a) = f'(a)(x-a) and g(x) is approximately g(a) + g'(a)(x-a) = g'(a)(x-a). Thus if g'(a) is nonzero, f(x)/g(x) near x = a is approximately f'(a)(x-a)/(g'(a)(x-a)) = f'(a)/g'(a). This is not a proof of L'Hospital's rule, since that rule is valid even when f'(x) and g'(x) are not continuous at a. Even those calculations are unnecessary to get intuition: the most basic physical example of a function and its derivative is position s(t) and velocity s'(t), and it should be intuitively obvious that if two cars are driven along parallel roads a very long distance, with the first car having velocity twice that of the second (ratio of derivatives), then in the long run the first car will travel twice as far as the second (ratio of distances), even if they did not start at the same spot (any initial gap in their distances will wash out in the long run).
> you omitted the existence of the limit of f'/g'

True, this is a necessary part of the statement.

> jhanschoo: let p(x-a) approximate [f]

> chebushka: f(a)+f'(a)(x-a) = f'(a)(x-a)

Did you both just assume f(a) = 0 and g(a) = 0?

> f(x)/g(x) near x = a is approximately f'(a)(x-a)/(g'(a)(x-a)) = f'(a)/g'(a)

I feel like you and OP are saying the same thing, since p = f'(a) and q = g'(a).

> Even those calculations are unnecessary to get intuition: the most basic physical example ...

Your example makes sense for the limit as x -> ∞ (driving a "very long distance"), but I don't see how it helps with the limit as x -> a at all.
I never thought of that! There should be more posts like this one.
More simply, l'Hopital's rule is an expression of the fact that the quotient of two functions can be viewed as the quotient of their Taylor series expansions. In the limit toward a point, the ratio of the leading (lowest-order nonvanishing) terms is what survives repeated differentiation.
[ "Is there really such a thing as a “hard” course/book, or rather is the correct adjective “badly written” or “directed at the wrong audience”?" ]
[ "math" ]
[ "8q80pv" ]
[ 17 ]
[ "" ]
[ true ]
[ false ]
[ 0.75 ]
This may be a controversial opinion, but it's one that has solidified in my mind over the course of my degree, looking at other people.

Like everyone else, I have had courses that I have found tough, sometimes even intractable. Looking around, they've been courses where most students have been utterly lost, or have basically gone into a sort of "streamlined" mode, where they essentially choose to only take in a very bare version of the course - because to them it is that or nothing at all. I used to be of the opinion merely that these were hard courses. That some courses are simply irreducibly difficult, and that that is just a fact that we have to try to deal with.

However, I now believe something almost entirely in the opposite direction. Looking back at these courses now, they often seem far, far easier. Looking at these courses through different textbooks at the time, I would sometimes luck out and find one that made the course look almost simple. I do not believe the former was just the course being easy in retrospect - often it simply feels like the course would have been fine to take now, and was simply at a different level to others presented at the time.

I now believe, mostly, that there really is no such thing as a hard course. Courses are either simply badly presented, presented to an audience that is not yet at the right level to receive them, or presented without sufficient time (a course covering 2/3 of the material in the same time would have been fine). There are some caveats to this, however:

This is specifically talking about students at the level of those at my university, since I do not really have any experience of those at others. My university is quite highly ranked, and very selective, so I know that there aren't any 'dumb' people here. In fact, even those getting lower grades are some of the most naturally clever people I have ever known - and some of the most hardworking. You could not really get a cleverer and more hardworking class if you tried. If a course is presented which half the students cannot get at all, then basically no class in the world at the same level would fare better. Therefore, I feel it's fair to criticise the course in this situation.

It is also worth saying this probably mostly applies only to undergraduate courses - I could believe that as things near the cutting edge they do become difficult for everyone, but are too useful to dismiss.

This is an opinion that some people I have talked to have agreed with entirely and wholeheartedly - both students and professors. Some, however, have vehemently disagreed, though with reasons I have found unconvincing. What is your opinion?

Edit: to clarify, this is mostly directed at courses where it is the material itself that is the issue - where people cannot follow the lectures, and when they go away and study they cannot understand the notes or textbooks, either. Hard exercises are, I think, a different issue - a big problem with these courses is that they often have artificially easy exercises, actually.
This is true to some extent, and often books and courses do a poor job meeting students where they are (if half the students are completely lost something is certainly going wrong...), or sometimes even intimidate students into dropping out of studying math altogether, but even without changing anything else a course can be made arbitrarily hard or relatively easy through selection of problems. If students are willing to put in the time and work, the kind of problem sets that take 20 hours to finish 4 problems are going to be 'harder' and also teach a lot more than the kind of problem sets that take 2 hours to finish 20 problems. Of course, calibrating the former kind of problem sets to be appropriate for all of the students in the course, and helping the less well prepared students to keep their heads above water could be a challenge. Personally I think we could do a lot better if we got rid of grades, but anyway...
To an extent I agree, but the courses I am specifically talking about are ones where the material itself is the problem. One specific example was a course on algebraic curves - some of the material itself was found fiendishly difficult by most, but almost everyone was fine with the problem sets, because they specifically avoided the more troublesome parts. This doesn't feel okay to me. People were not actually getting the course at all - they were just able to get away with it because every question was basically written to not require any understanding. An example would be that people were given questions where they could apply Riemann-Roch, despite basically no one actually understanding any part of the proof, and with many not even understanding what a canonical divisor was. They just plugged in the statements; essentially anyone with a basic understanding of logic could have done them.
Math pedagogy has always been a problem, and very elitist. This is something I've sort of picked up on when I took my first in-person math class after self-studying for a while. Beyond just the teacher, some of the students were just complete asshats. There was a lot of faux shock - "omg you don't immediately see this solution? omg you don't know this random theorem from this field of math you've never heard of that actually makes the solution completely incomprehensible???? IMPOSSIBLEEEEE" - which was super fucking irritating. Professors were a mixed bag depending on the course being taught.
Just went through a PhD course. Lectures were good and almost everyone had nearly full attendance (voluntarily), the book by the lecturer (available online for free) was good, and there was ample information about the oral exam, with over a month to prepare. Yet still 50% failed. I think that means the course was hard.
[ "The issue with the Monty Hall Problem" ]
[ "math" ]
[ "8q6m6s" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.33 ]
[deleted]
Your four scenarios are not equally likely, so calculating probability by counting outcomes is wrong.

> The "solution" to this problem is that you should always change your answer due to statistical probability. When you first made your choice, you had a 1 out of 3 chance of picking the car. If you switch, you are now making the decision 1 out of 2.

This is also wrong: if you switch, you win 2 times out of 3. https://en.wikipedia.org/wiki/Monty_Hall_problem
OP is actually correct in that quote - at the point when you're deciding whether to switch doors or stay, you're choosing 1 out of a possible two doors, so looking at that choice in isolation, you have a 50% chance of choosing the winning door, and a 50% chance of choosing the losing door. However, you are correct that when you switch, you win 2 times out of 3.
That is a possible reading, but I don't think it is what OP meant. In fact, I expect it is another case of the same mistake: failing to distinguish between counting outcomes versus probabilities.
So, looking at your four scenarios, you pick door #1 in 2 of them, and doors #2 and #3 in one each. Does that mean that there's a 50% chance you picked door #1? Why were you more likely to pick that door than either of the other doors? The issue with your reasoning is that counting outcomes is not the same as computing probabilities. Unfortunately quite a lot of people come away from their high school math classes believing the following "fact":

> If you have N possibilities, the chance of any one of them occurring is 1/N.

This might sound reasonable at first, but it is actually completely false. The correct statement is:

> If you have N *equally likely* possibilities, the chance of any one of them occurring is 1/N.

If the first one were true, the chances of me winning the lottery next week would be 50%. Obviously that isn't true. Once you realize this distinction, the "paradox" of the Monty Hall problem basically goes away. In the Monty Hall problem, you have two choices: the door you originally picked, and the door Monty left closed. The doors were picked in completely different ways, so there's no reason to think that they should have the same probability of containing the car, so there's absolutely no reason to think that each one should have a 50% chance of containing the car.
As you can see in my reply to /u/Brightlinger's comment, you are correct in thinking that at the point of choosing whether or not to switch, you have a 1 out of 2 chance of making the correct choice. However, that's not the entire problem, and what is important is the fact that on your initial choice of door, you have a 1 out of 3 chance of making the correct choice, and a 2 out of 3 chance of making an incorrect choice.

You have two decisions when playing Monty Hall: (1) your first choice of door - 3 choices, and (2) whether or not to switch doors after 1 goat has been revealed - 2 choices. Since you have 3 choices in the first decision, and 2 choices in the second decision no matter which choice you make in the first decision, there are a total of 6 possible scenarios you can play out (each of the 3 initial doors, each followed by staying or switching - 6 unique paths in your decision tree). But there are 3 possible choices for the winning door, so the game can unfold in a total of 18 unique ways (the game tree has 18 unique paths).

If we consider what happens for each of the six scenarios above, with the winning door being door 1, then door 2, then door 3, you win 2 out of 3 times when you switch, because 2 out of 3 times your initial door choice was not the winning door. And you win only 1 out of 3 times when you don't switch, because 1 out of 3 times your initial door choice was the winning door.

If this explanation still somehow leaves you unconvinced that switching is the better strategy, I highly recommend building a program to simulate this game. I recently had a moment of skepticism myself about this problem, and built a simple Python simulation (about 25 lines of code), and the simulation clearly shows switching wins twice as often as not switching.
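A sketch of such a simulation (not the commenter's actual code):

```python
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
print("switch:", sum(play(True) for _ in range(n)) / n)   # ~0.667
print("stay:  ", sum(play(False) for _ in range(n)) / n)  # ~0.333
```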
[ "Is there a term for when you mechanically apply the rules of differentiation rather than differentiating without functional form?" ]
[ "math" ]
[ "8q6shn" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.2 ]
For example, in economics we have derivations such as Shephard's lemma and the Slutsky equation that are true regardless of any particular utility or cost function. In examples, textbooks give specific utility and cost functions to verify the results, where you mechanically apply differentiation to arrive at them. I searched online for a term for this but wasn't able to get anywhere.
> more specific mathematical terminology to distinguish between the general case and a specific example

I think regular English does the job just fine, so there isn't any actual mathematical terminology needed. It appears this is really just a matter of substituting a specific functional form you might encounter in the real world as opposed to leaving it generic. In your first link, it wouldn't make much sense to specify the function since the results should hold in general. But in the second, the goal is to build up practice actually differentiating specific functions.
It's not clear to me what you're asking. Using a specific function rather than proving the general case is called "using an example". If you mean something else, I'm not sure what.
I just thought there would be more specific mathematical terminology to distinguish between the general case and a specific example. For example, in this context on page 2 the chain rule is used in a theoretical way, whereas here the chain rule is used in a mechanical way, applying the rules of differentiation.
The title made me think about differential algebras.
Non-Mobile link: https://en.wikipedia.org/wiki/Differential_algebra
[ "Trig question about latitude and angles." ]
[ "math" ]
[ "8q5q4p" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.33 ]
I work in the satellite TV business and need to take weather into account with regard to signal strength. My question is: if a satellite is in geostationary orbit and a dish is at a specific latitude, how do I calculate the latitude at which the signal going to that dish crosses a specific altitude, namely the altitude of a thunderstorm?
The satellite is at a fixed (geostationary) altitude above the equator, at the longitude corresponding to its name (i.e. if it's Astra 19.2E then it's at longitude 19.2°E over the equator). Use spherical coordinates. It's easy to calculate the angle between the line connecting you (your dish; latitude and longitude are relevant) to the satellite and the surface of the Earth at your location (approximate it with a plane and project the connecting line onto that plane). Then you can follow the connecting line to the altitude of the thunderstorm and see if you're affected or whatever. Draw a sketch.
Unfortunately, I'm not so awesome at math, so I'm at a loss on how to calculate those things.
Ah well, as the other user says, you will already have the azimuth and elevation anyway from some table. Most of what I described would calculate this, and the last part would use the angle to calculate the distance of the cloud. I think you're good. Otherwise I would have made a sketch when at home, describing it a bit better.
You probably just want an approximate answer: Take your latitude and divide by 0.9, to get the angle of the satellite from directly overhead. So if you're at 45 degrees latitude, you'd get alpha = 45/0.9 = 50 degrees. (This accounts for the geo satellite's altitude above the Earth. If it was a star above the equator, you'd skip this step.) You might already have this angle, since you'd need it to know where the satellite is to point at it. (If you have the angle above the horizon, you'd use 90 minus that angle). Then take the height of the thunderstorm in miles, and multiply that by tan(alpha) to get how far south the thunderstorm is that you're worried about. This assumes the satellite is close to due south, not sure how accurate that is. Then divide that by 69 to get how many degrees that is (1 degree change in latitude is about 69 miles). If you're at 45 degrees latitude, and the storm is at 50,000 feet, or about 10 miles, you'd have 10 * tan(50 degrees) / 69 = 0.17 degrees latitude difference. Not a big change.
So could I just not divide by 69, so that I get the max distance in miles at which the storm would affect the signal?
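The rule-of-thumb calculation from two comments up, as a short script (a sketch; the 0.9 factor and the 69 miles per degree of latitude are the commenter's approximations, and the satellite is assumed due south):

```python
import math

def storm_offset(latitude_deg: float, storm_height_mi: float):
    # Angle of the satellite from directly overhead (rule of thumb
    # accounting for the geostationary satellite's finite altitude).
    alpha = latitude_deg / 0.9
    # Horizontal distance south at which the signal path crosses
    # the storm's altitude.
    miles = storm_height_mi * math.tan(math.radians(alpha))
    return miles, miles / 69  # miles, and degrees of latitude

miles, degrees = storm_offset(45, 50_000 / 5280)  # 50,000 ft storm top
print(round(miles, 1), "mi =", round(degrees, 2), "deg")  # ~11.3 mi, ~0.16 deg
```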
[ "3 years of a bachelors in Maths!!" ]
[ "math" ]
[ "8q8eeb" ]
[ 888 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.93 ]
null
And here I was in America taking gen eds for half of my degree
It pisses me off so much. In Europe the students come in knowing more math, and then immediately start taking real math classes. Here we come in knowing less, then are required to take 60 hours of B.S. to graduate.
It's in the UK, so there are some fairly significant cultural things that need to be mentioned: 1) People enter the university mostly having studied nothing but maths and related subjects (physics, etc.) for the last two years, and on a much more standardised curriculum, so there's a much higher level of understanding that can be assumed at the start. 2) People go to university to study maths, and study nothing but maths. "General education requirements" are not things that exist.
It's really bugging me that the first two binders are out of order.
+ we usually graduate without debt
[ "Fake proof for deriving the surface area of a sphere" ]
[ "math" ]
[ "8q586j" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.21 ]
[deleted]
It will be (2πr)(r)(π)(π). Simplify and you get 4πr^2. No I don't.
Steps 4 and 5 seem very unintuitive, what exactly are you doing in them?
THANK YOU, okay that is very helpful. It means that this method is on to nothing.
Rotating? The object in 3 can’t be rotated to get 4?
Rotating by π in both steps, and the entire path that it takes is marked down.
[ "Geometry and topology in statistics" ]
[ "math" ]
[ "8q41t8" ]
[ 9 ]
[ "" ]
[ true ]
[ false ]
[ 0.85 ]
I read a paper today about how algebraic topology can be of use in statistics when there are certain geometric requirements and this kind of stimulated my imagination. More specifically this paper went into how persistent homology can be used for kernel density estimation, with conditioning on the support of the density. I am now curious, are there any other areas where geometry and statistics meet? I know of information geometry where statistical manifolds are studied, and I know of manifold learning, but is there more? Is there anything of this kind being done in current research, besides topological data analysis?
Not sure whether it counts, but hyperbolic geometry is useful in data visualization and social network analysis (see our work and some references here, though it needs updating). EDIT: updated the page (but not the downloadable version yet).
Hyperbolic geometry is one of those things I never really got around to looking at, so I'm going to kill two birds with one stone and have a look at this later. Danke!
Algebraic geometry has applications in statistics.
https://arxiv.org/abs/1712.04487
I've recently stumbled upon Tropical Geometry of Deep Neural Networks.
[ "Can someone with an IQ of 110-115 excel at Math?" ]
[ "math" ]
[ "8q4u4v" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.26 ]
This question might appear a bit controversial, because you hear a lot of people say that IQ is not a real measurement of intelligence. However, there are others that say that there is a relationship between IQ and people that major and are successful in STEM fields. I personally work with a mathematician that has a PhD with an emphasis in number theory, and the guy is just brilliant. He seems to grasp things very quickly, doesn't struggle at all with complex concepts, and frankly I just feel dumb around him whenever he's explaining these concepts to me. lol.

I know it's a bit controversial, but I feel that genetics play a huge role in what you can accomplish in life. For example, no matter how hard a 5'5 basketball player works, there's no way he will ever make it in the NBA. This is all due to his genetics. I feel that this also applies to the physical brain. My IQ came in at 115. I took the IQ test locally from a certified psychiatrist and these were the results. My logical and critical thinking skills were sub par. All this is worrisome to me because I am interested in getting a degree in chemical engineering, and this is a very math intensive major.

Not too long ago, I also heard Sam Harris and Jordan Peterson talk about their thoughts on IQ. They are both of the opinion that IQ does play a major role in determining whether you will be successful in these fields or not. According to them, some people can try as much as possible to succeed in these fields, but no matter how hard they try, they will fail. In fact, they say that telling someone with a low IQ to pursue a degree in these fields is unethical, and that one should be honest with these people when giving life advice.

What do you guys think? Is my IQ too low to pursue a degree in chemical engineering? Here's a link to the podcast where Sam Harris talks to Charles Murray about the subject, for those of you that are interested.
IMO, Sam Harris, Jordan Peterson, and Charles Murray are all pseudointellectual hacks. YMMV.
Yes, but unfortunately someone who listens to Jordan 🅱️eterson and Sam Harris can't
Intelligence correlates with many measures of success across a population; for individual cases, it's not that useful of a predictor. Your conscientiousness, organizational skills, academic resources, and personal interest in the subject are significantly more important in determining whether you could pursue that degree than your IQ, and most of the relevant ways IQ affects your prospects are better measured through more direct means.
As someone in a PhD program right now, I can say first hand that IQ plays a small role in how our program turns out. You should be more worried about work ethic and mental health.
If you want to be in the NBA of chemical engineering - i.e., a leading researcher in the field - then yeah, you're unlikely to succeed without a really high IQ. If you just want to get a degree and a decent job, no. It's a bachelor's degree, dude. It's not that elite.

> In fact, they say that telling someone with a low IQ to pursue a degree in these fields is unethical, and that one should be honest with these people when giving life advice.

You don't have a low IQ. You are a full standard deviation above average.
[ "When will the Fields medalists be announced?" ]
[ "math" ]
[ "8q3kt2" ]
[ 18 ]
[ "" ]
[ true ]
[ false ]
[ 0.85 ]
[deleted]
I believe the public announcement will be at the ICM itself on August 1st. The winners have likely already been privately told. They will all be giving talks on their research at the ICM.
I highly doubt it.
As someone who worked in the city of Haribo, I can give the insider info that one of the most-guessed people will get one.
Bobby Flay?
[ "What are some applications (in any field) where one would want only largest positive eigenvalues of a symmetric matrix?" ]
[ "math" ]
[ "8q2ukc" ]
[ 11 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
null
I'm not a graph theorist, but I believe that there are lots of connections between properties of a graph G and the spectrum of its adjacency matrix. In particular, the difference of the largest two eigenvalues is "related to the expansion of G" (full disclosure: I don't know what that means).
Check out principal component analysis. It's used all over and you'll find lots of info. I've used some ideas from PCA in the context of optimizing models for nuclear cross section calculations (collisions involving the nucleus). In my case, we were using Bayesian statistics and Metropolis Monte Carlo to combine theoretical models and experimental data while correctly propagating uncertainty. My code was dealing with large covariance matrices, which are symmetric and positive semidefinite. In this case (IIRC), the eigenvectors with the largest eigenvalues tell you the directions in parameter space which account for most of your model's variance and hence contain the most information. By cutting out the directions which have small eigenvalues, i.e. have little bearing on your optimization, you reduce the dimensionality of your problem, which helps save on computational costs. Matrix diagonalization isn't cheap! O(N^3) in the worst case. In general, PCA is very important to kernel methods in modeling... think machine learning!
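A minimal sketch of that dimension-reduction step on synthetic data (not the commenter's nuclear-physics code; the data and cutoff are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data whose variance is concentrated in a few directions.
X = rng.normal(size=(500, 10)) * np.array([5, 3, 1, .1, .1, .1, .1, .1, .1, .1])

cov = np.cov(X, rowvar=False)            # symmetric positive semidefinite
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns eigenvalues ascending

# Keep only the directions carrying most of the variance.
k = 3
top = eigvecs[:, -k:]          # eigenvectors of the k largest eigenvalues
X_reduced = X @ top            # project onto the dominant subspace
print(eigvals[::-1][:4])       # leading eigenvalues, largest first
```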
Oh man, how oddly specific... there is a paper I just read about electron transport through single molecules (for molecular junctions - this is my area of research) that delved into this topic. Here, the adjacency matrices associated with the graphs (organic molecules) were used to write out expansions for the Green functions, which basically give you the information you need to calculate many key properties while correctly taking into account many forms of quantum interference (kind of a big deal in most systems).
Suppose you have a system of n linear differential equations. It can be written du/dt = M u, where M is an nxn matrix. Then at large time the solution grows exponentially, like exp(lambda t), where lambda is the largest positive eigenvalue (for almost all initial conditions). This is hugely important, for example in determining the stability of any system in physics, chemistry, etc.
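A small numerical check of this claim, with an arbitrary symmetric matrix as the example:

```python
import numpy as np

# du/dt = M u for a symmetric M.
M = np.array([[0.5, 0.2],
              [0.2, -1.0]])
eigvals, eigvecs = np.linalg.eigh(M)
lam = eigvals[-1]  # largest eigenvalue controls long-time growth

# Crude Euler time stepping; ||u(t)|| should grow like exp(lam * t).
u = np.array([1.0, 1.0])
dt, steps = 1e-3, 10_000
for _ in range(steps):
    u = u + dt * (M @ u)
t = dt * steps
print(np.log(np.linalg.norm(u)) / t, "vs", lam)  # close to lam
```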
I don't want to sound like an asshole, but did you read what's on the other side of the link? It's with sign.
[ "Advice for a younger, interested person" ]
[ "math" ]
[ "8q2mva" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.84 ]
I am currently studying GCSE level maths and I'm pretty good at it, but what would you advise I start with if I were to learn stuff myself beyond my curriculum? I'm new to this subreddit and was just looking at the top posts, and I literally didn't understand anything.
You should read Letters to a Young Mathematician
I think you'd probably want to go through A Level content first before you begin stuff on this subreddit, which is usually undergrad/postgrad stuff. However, a few areas you might wish to begin with:

- Set theory
- Proofs & logic
- Modular arithmetic

All of these are fairly accessible at GCSE level. However, grasping some of the ideas in the topics listed above will be more difficult than GCSE content. Also, do realize that certain concepts just require time to click.
Genuinely made my day, the fact that a random stranger cares.
Thank you for the effort and I'll definitely start this once my exams are over
You'll be surprised how much you can learn if you put in the work
[ "Should I take applied Calculus 2 or regular Calculus 2?" ]
[ "math" ]
[ "8q2g03" ]
[ 0 ]
[ "Removed - see Career & Education Questions thread on front page" ]
[ true ]
[ false ]
[ 0.45 ]
null
I would prefer a doctor that spent his time actually studying things related to medicine instead of calculus.
You're ridiculous. Would you want a math professor who, 15 years ago, decided not to run a marathon?
Would you want a math professor who decided to take the easier biology during their undergrad?
Do you really want a doctor who took the easier math?
Doesn't really matter though, does it? There's a difference between not being able to do something and not wanting to spend the time to learn it.
[ "[CONTEST] - The Lowest Unique Integer Game" ]
[ "math" ]
[ "8q5n66" ]
[ 496 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
Hello, r/math! Welcome to my little game theory game/experiment. The game is as such: I have a google form below. It's easy. Put in an integer and your reddit username in said form. Easy as that.

The winner will be the person who submits the LOWEST UNIQUE INTEGER. Here's what I mean by that: after two weeks, I will go through the submissions, and delete any repeat usernames, any invalid responses, as well as any accounts under a week old. Then, I'll look for the lowest number that exactly one person submitted. The winner will be the person who puts in the lowest number that nobody else submitted. Sounds easy, right? It is. I'm putting in 50 dollars to the winner.

All data INCLUDING REDDIT USERNAMES will be shared to this subreddit two weeks later. By submitting, you're acknowledging that you're okay with me sharing your number and my findings to the subreddit. I hope to be able to rerun this contest every once in a while. Show your true Game Theory chops and see if you can outwit the rest of the subreddit. That said... Have fun, and good luck. You may comment whatever you like or discuss the contest to whatever extent you like.

To spice this up, here are the results of the last run: the winning number was 91. The numbers 35 and 75 were not chosen last time, but every other number between 1 and 90 was chosen at least twice, with over 1500 submissions. Feel free to share and spread this to other subreddits and other communities to get as large a sample as possible. I'm very interested in seeing what the least common numbers will be this time!
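The winner-determination step described above is straightforward to code; a sketch, with made-up submissions (field names are invented):

```python
from collections import Counter

# One entry per distinct user: (username, submitted integer).
submissions = [("alice", 1), ("bob", 1), ("carol", 3), ("dave", 4), ("erin", 3)]

counts = Counter(n for _, n in submissions)
unique = [n for n, c in counts.items() if c == 1]  # numbers submitted exactly once
if unique:
    winner_number = min(unique)
    winner = next(u for u, n in submissions if n == winner_number)
    print(winner, winner_number)  # dave 4
else:
    print("no unique submission")
```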
That's a great idea! I put in 1 also and if either of us win we'll split with everyone!
I chose 4
It says integer on the form, before anyone gets too excited. There goes my submission of -[(Graham's Number)↑↑(Graham's Number)]-1
I think I'm most interested in finding out if the results will be similar to the last game more than anything. Will knowing about the results of the last game change the distribution of numbers picked? So many variables to consider.
I was going to take the ultrafinitist position and choose the smallest integer.
[ "Does Calculus and Limits solve Zeno's Paradox or does it simply define it away?" ]
[ "math" ]
[ "8q1s7b" ]
[ 12 ]
[ "" ]
[ true ]
[ false ]
[ 0.65 ]
[deleted]
Zeno's paradox asks: if we move toward some fixed destination by covering half the remaining distance at each step, will we ever reach our destination? The idea of a limit of a (Cauchy) sequence dissolves the confusion in this question by offering a precise notion of what we mean by "ever reaching the destination" which generalizes to cases like this one, where the remaining distance keeps being reduced by half. You're correct that the sum 1/2 + 1/4 + ... + (1/2)^n does not equal 1 for any finite value of n. So, in the usual sense of two real numbers being equal, it is not the case that the distance we cover will at some point be equal to the starting distance. But a different notion of equality makes sense of things: we say that for any positive value of ɛ, no matter how small, it only takes a finite number of steps for our total distance covered to get within ɛ of 1. In this sense we will reach our destination in finite time, as long as we think of our destination as an arbitrarily small neighborhood of a point rather than a single point.
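To make the ɛ-statement concrete, a tiny sketch:

```python
def steps_to_within(eps: float) -> int:
    # Number of halving steps before the covered distance is within eps of 1.
    total, n = 0.0, 0
    while 1 - total >= eps:
        n += 1
        total += 0.5 ** n  # cover half the remaining distance
    return n

for eps in [0.1, 0.01, 1e-6]:
    print(eps, steps_to_within(eps))  # 4, 7, 20
```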
It's not Zeno's paradox, it's Zeno's fallacy. This is what you get when you try to fit reality into tautological, unitless concepts. The first thing that you should notice is that the mathematics doesn't deal with any units. There are no centimeters or meters; there is no time or any other aspect of physical reality. The whole fallacy "works" on the presumption that if you increase the number of samples of the distances already travelled by the two actors, then you will somehow slow down time or lower the speed of the actors at the same time. But you are not changing time or speed. You are just increasing the number of measurements or discrete calculations of the distance travelled (if you want to run the simulation on your computer) after each and every step. So a second is still a second, and the speed is still the same as it was; the only thing that changes is the frequency of "distance probing", the number of samples you take, which is ever-increasing ad infinitum within the whole time period of the Achilles/turtle run. Or, in other words: the fact that your computer won't be able to finish the calculations - because it will run out of memory/storage trying to cut the whole distance travelled into an infinite number of infinitesimal pieces while testing the distances for convergence by looking at the same column in a matrix after N steps (where N goes to infinity) - doesn't affect Achilles and his ability to outrun the turtle or to finish the run itself, since the "proximity" of the columns after N steps of calculations isn't the same as measuring the distance after M predefined, constant units of time.
It depends on your philosophical beliefs. I'm not particularly educated in philosophy of math so I don't have a well-formed opinion, but I would agree with that statement. The construction of the real numbers and all its associated pathologies seem very artificial to me, maybe because they're so contrived yet so useful. This isn't the only opinion on the matter, so I hope someone gives a more detailed answer to this question.
[ "Markov chains, stochastic processes and dev comms (from a comms grad student)" ]
[ "math" ]
[ "8q12p1" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
Hey folks, this may end up being a simple question for you all, but I have a Comms degree, so what the hell do I know. Context is development communication (not programming; I'm talking about communication activities that are needed to improve community infrastructure, social and economic stability, etc. This is social science/comms leaning heavily on math). Chime in if my thinking is erroneous:

In this context, Markov chains can be viewed as multi-step flow communication models. Markov chains can contain multi-output nodes (e.g. mass comms in this context). Communication is undoubtedly a stochastic process, because message encoding/decoding involves a degree of randomness due to individual perspectives (even if infinitely small). Markov chains can be designed to accommodate stochastic processes.

Can Markov chains include nodes that are both stochastic and deterministic? In this context: people communicate conscious and subconscious messaging. Here, people are being viewed as Markov chain nodes. Would they not be stochastic and deterministic?

Again, I am not a math person. These are my conclusions taken from the reading I've done; they may be completely wrong. Please enlighten me, you beautiful math wizards!
> Markov chains can be designed to accommodate stochastic processes.

Not sure what this means. For mathematicians, "Markov chains" refer to certain types of Markov processes, so a Markov chain is a stochastic process.
I don't think it is insane to model communication with Markov chains. It could be a useful model for some purposes. One of the key properties of Markov chains is that they are "memoryless", that is, the future state depends only on the most recent information. You may want to think about how reasonable it is for communication to be "memoryless".
Something you might want to investigate further is Hidden Markov models (HMMs). In these models, the states of the Markov process are "hidden" or unobservable. This allows the states to be used to model abstract concepts. For instance, there could be a state representing "community is socially stable" and one "community is socially unstable". Then the hidden states can be stochastically related to things that are observable, like the number of community events in a certain period. To parameterize all this, you'll need to choose: 1) the set of hidden states and an initial state distribution, 2) the transition probabilities between hidden states, and 3) the emission probabilities relating each hidden state to the observables.
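A minimal sketch of such a two-state model, with every probability invented purely for illustration; the function shown is the standard forward algorithm for computing the likelihood of an observation sequence:

```python
import numpy as np

# Hidden states: 0 = "socially stable", 1 = "socially unstable".
pi0 = np.array([0.8, 0.2])          # (1) initial state distribution
A = np.array([[0.9, 0.1],           # (2) transition probabilities
              [0.3, 0.7]])
B = np.array([[0.1, 0.3, 0.6],      # (3) emission probabilities:
              [0.6, 0.3, 0.1]])     #     few/some/many events per period

def likelihood(obs):
    # Forward algorithm: alpha[i] = P(observations so far, state = i).
    alpha = pi0 * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(likelihood([2, 2, 0, 0, 1]))  # P(many, many, few, few, some)
```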
Indeed, if you have a lot of real data (a lot being the key), you can learn both the transition probabilities in 2) and the emission probabilities in 3). Look up the Baum-Welch algorithm, which is a particular adaptation of the Expectation Maximization algorithm for learning HMM parameters. If you do not have real data, you do have to guess the parameters using intuition and then do a lot of testing/adjusting to get a good model. I think the more you dig into it, the more you may become disappointed with how approximate all these stochastic modeling methods are. There really is no such thing as an exact model. A good model will be reasonably accurate and, most importantly, it will be simple.
Thanks! The model is a distant thought right now. I need to check some of the foundational ideas first. To the lecture link!
[ "What's the Monkey number of the Rubik's cube? New video on solving twisty puzzles with random turns by Mathologer" ]
[ "math" ]
[ "8q0w0j" ]
[ 58 ]
[ "" ]
[ true ]
[ false ]
[ 0.88 ]
null
Pretty sure that must be it. All those 2, 4, and 6 step solutions must make a huge difference.
What's the intuition for the Mean Time from Equilibrium being greater than the Mean Recurrence Time? Is it because the mean recurrence includes all the cases where the cube doesn't get very scrambled?
You can actually have odd length solutions as well, if you're using HTM (half turn metric)! Though, you can't in QTM (Quarter Turn Metric). As every quarter turn does a 4-cycle of corners and edges, the corners and edges will both have odd parity if you've done an odd number of quarter turns, meaning the cube cannot be solved. Being able to use parity-related arguments in QTM is why it's often preferred for mathematics. E: In case you're interested, this is also why you can't just swap two edges on a 3x3 cube. Doing so would involve the corners having even parity and edges having odd parity, which is impossible. You can, however, swap two edges and two corners .
Yes, exactly
I formed a graph of the 24 configurations, placing edges where a valid turn brings you from one config to another. Writing E for the expected number of turns to get to "solved", then E(solved)=0 and E(a)=1+mean(E(b)), where the mean is over the nearest neighbors b of a (there are six for the 1/4-turn metric, and nine for the 1/2-turn metric). This gives a big system of equations which may be solved for each E(a). If each starting config is equally likely, then we're after the average of all the E(a). Looks like this method (at least without some other insight) would be rough for tackling the 2x2x2...
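That linear system can be solved mechanically; a sketch on a toy four-state graph (the commenter's actual 24-configuration graph isn't reproduced here):

```python
import numpy as np

# neighbors[a] lists the configs reachable from a in one turn.
# Toy example: configs 0..3 arranged in a cycle, with 0 the solved state.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
solved = 0

# E[solved] = 0 and E[a] = 1 + mean(E[b] for b adjacent to a),
# i.e. E[a] - mean over non-solved neighbors b of E[b] = 1.
states = [a for a in neighbors if a != solved]
idx = {a: i for i, a in enumerate(states)}
A = np.eye(len(states))
rhs = np.ones(len(states))
for a in states:
    for b in neighbors[a]:
        if b != solved:
            A[idx[a], idx[b]] -= 1 / len(neighbors[a])
E = np.linalg.solve(A, rhs)
print(dict(zip(states, E)))  # expected turns to solve: {1: 3.0, 2: 4.0, 3: 3.0}
```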
[ "If you had to show one paper to someone to show that your subfield is beautiful, what would you choose? (assuming they're equipped to understand it)" ]
[ "math" ]
[ "8q14ta" ]
[ 328 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
null
On the Number of Prime Numbers Less than a Given Quantity by Bernhard Riemann. I don't really think of this as quite my subfield, but it's close enough. The study and usage of L-functions, including the zeta function, to prove things about primes is really important.
Deformation quantization of Poisson manifolds by Maxim Kontsevich contains a lot of good stuff.
The one that I wrote. But actually, I think Ribet's converse to Herbrand's theorem is a great paper. It's short, but the method is really cool, and it highlights how important modular forms can be in algebraic number theory. Edit: A good summary of Ribet's method is the hexagon on the second page of this, which is also a good read!
I understand about 20% of this. The word 'of'.
That would be Rudolf Kalman's introduction of his recursive estimation filter to the world.
[ "An interesting topology question" ]
[ "math" ]
[ "8q0uym" ]
[ 10 ]
[ "" ]
[ true ]
[ false ]
[ 0.85 ]
I was thinking a little about this today - if two subspaces A and B of a topological space X are homeomorphic, it is not true in general that X\A is homeomorphic to X\B. But is this true in some cases? So here are two separate, but very related, questions:

1) What are the topological spaces X such that, if Y is any topological space and f and g are any two embeddings of Y into X, then X\f(Y) is homeomorphic to X\g(Y)?

2) What are the topological spaces Y such that, if X is any topological space and f and g are any two embeddings of Y into X, then X\f(Y) is homeomorphic to X\g(Y)?

Any comments/thoughts would be appreciated!
Certainly any nontrivial manifold is in neither class, due to the existence of knots in every dimension. I would guess that finite spaces would be in both classes. Given that there is no classification theorem for topological spaces, I wonder whether there is any reason to expect there to be any description other than the one given.
It could be helpful to ask this question about graphs (nodes and edges) instead of topological spaces. (Or more generally Fraisse structures, some of which are topological spaces.) For example, the Rado graph (i.e. the countable random graph) will have your desired property for finite subgraphs. That is, all co-finite subgraphs of the Rado graph are isomorphic (via a back and forth argument). You might be able to push it a bit further, but you won't get this property for all subgraphs. It's for the same reason that the real line doesn't have your property: you can embed R in itself so that the complement is connected, or alternatively so that the complement is disconnected.
Assuming that by "X\Y" you mean the complement of Y in X, there are no (nonempty) spaces satisfying 2. You can always do the following: Assume Y is such a space, and further assume Y is connected (if not, just do some version of the construction below componentwise). Choose a point in Y. Attach two closed intervals to Y at that point (and topologize this space appropriately). At the other end of one of the intervals, attach another copy of Y. Removing the first copy of Y will result in a disconnected space; removing the second will not.
Oh right. Yeah finite spaces don't work either.
Hmm, finite spaces are a no, I think. For example, for number 2, if Y is a point, then the embeddings of Y into X = {1, 2, 3, 4} with topology {empty set, {1, 2, 3}, {4}, X} produce different spaces upon removing Y. Also, for number 1, if the topology of a finite space isn't discrete or indiscrete, then it can't serve as X either, for similar reasons. Btw, I can see how knots mean manifolds can't be of class 1, but can't manifolds potentially be of class 2?
[ "Does an infinite number of infinitely small objects weigh nothing or an infinite amount? (Density being a function of volume.)" ]
[ "math" ]
[ "8q0sbm" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.3 ]
null
Are you describing an integral? Not all integrals are zero. Some are zero, some are finite, some are infinite.
You have to define "infinite" and "infinitely small" a little more carefully. Typically this is done through an integral.
> integrals are just infinite sums of infinitesimals

Well, limits of finite sums of finite values.

> and they almost always have answers as real numbers, not infinitesimals

In fact they have a real number rather than infinitesimal value whenever they don't diverge, which is unsurprising given that infinitesimals don't exist in the standard construction of analysis.
They weigh nothing. Weight requires mass and gravity. Since neither the mass nor the gravity of the system has been defined, they weigh nothing. More technically, they have no weight as an attribute.
It depends.
[ "Approximating Pi ( Monte Carlo integration ) | animation" ]
[ "math" ]
[ "8q04we" ]
[ 125 ]
[ "" ]
[ true ]
[ false ]
[ 0.9 ]
null
Conceptually and visually, this (standard) setup is nice, but it misses something important in more practical examples, which is that a big goal in Monte Carlo integration is reduction of variance.

In this example, you want to say that pi=E[X], where X is 4 times a Bernoulli(pi/4) random variable. The variance of that Bernoulli is pi(4-pi)/16, so the variance of the approximation of pi is pi(4-pi), about 2.7. The standard deviation is then about 1.6. That's bad: you want the standard deviation of your approximant to be about a third of your target error tolerance (so that your error tolerance is achieved with ~99% probability), but the standard deviation of your approximant only goes down as 1/sqrt(n). So for example, if you want an error tolerance of 0.1, the standard deviation of your approximant needs to be about 48 times smaller than the standard deviation of the individual terms, which means you're looking at about 2300 samples. You can see this in the OP. For example, one frame approximates pi as 4(379/500), which is off by about 0.11. The standard deviation at this stage is about 0.07, so you have a deviation of about 1.6 sigma, which is no surprise!

By comparison, if you uniformly sample on [0,1] and evaluate sqrt(1-x^2), we again get an approximation of pi/4, but the standard deviation for the approximation of pi is about 8 times smaller. Thus the same level of accuracy is going to need about 60 times fewer points, at the cost of computing some square roots.

You can also use importance sampling. For example, starting from [; \pi/4 = \int_0^1 \sqrt{1-x^2} dx ;] you could instead look at [; \pi/4 = \int_0^1 \frac{2 \sqrt{1+x}}{3} \cdot \frac{3}{2} \sqrt{1-x} \, dx. ;] This suggests the algorithm "average 2/3 sqrt(1+X) where X has PDF 3/2 sqrt(1-x)". You can achieve this by setting X = 1 - U^(2/3) where U is uniform on [0,1] (this trick is called the "probability integral transformation"), and then you want to average values of 2 sqrt(1+X)/3. This cuts the variance by about a factor of 10, which means you need about 10 times fewer points, or 600 times fewer points than you needed with the OP approach. The cost is now computing a 1/2 power and a 2/3 power each iteration (plus the cost of doing the calculation above by hand).
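A side-by-side sketch of the three estimators discussed above, with the same sample budget for each (the comparison setup is mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.random(n)
v = rng.random(n)

hit = 4 * (u**2 + v**2 <= 1).astype(float)  # indicator (dart-throwing) method
smooth = 4 * np.sqrt(1 - u**2)              # average 4*sqrt(1-x^2)
x = 1 - u ** (2 / 3)                        # X has pdf (3/2)sqrt(1-x)
importance = 4 * (2 / 3) * np.sqrt(1 + x)   # importance-sampled estimator

for name, est in [("indicator", hit), ("smooth", smooth), ("importance", importance)]:
    print(f"{name}: mean={est.mean():.4f}, variance={est.var():.3f}")
# All three means are near pi; the per-sample variances come out
# near 2.70, 0.80, and 0.09 respectively.
```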
In that case you're looking at a sphere of volume 4pi/3 embedded in a cube of volume 8, so pi is 6 times the expected value of a Bernoulli(pi/6), compared to 4 times the expected value of a Bernoulli(pi/4) in 2D. This makes things worse: the variance of the approximation of pi is pi(6-pi), compared with pi(4-pi) for the 2D version. It might look better because the variance of the approximation of pi/6 in 3D is smaller than the variance of the approximation of pi/4 in 2D, but that's apples to oranges.
Depends on how much accuracy you need and the generator you're talking about, to some extent. But in most cases you're going to be shorter on time than on entropy. As a result, you're generally looking at an accuracy of maybe 10 times the standard deviation of the underlying variable (which will take something like 90 billion samples to get with 99% probability) before your system is just taking too long. Where things get hairy is when you go to parallelize. Under naive parallelization, the independence assumptions that are crucial for this kind of thing to converge break down completely (the threads become correlated). So it is delicate to make parallel MC simulation work.
It's the average value formula for integrals, which falls out of the mean value theorem. Usually you see it as avg(f) = int(f)/(b-a), but if you multiply by b-a, you get the average value times the width of the interval, which is equivalent to the integral of that constant value on (a,b).
Beautiful.
[ "Intuitively, what does it mean to take a L^p norm for p > 2?" ]
[ "math" ]
[ "8pzx4l" ]
[ 19 ]
[ "" ]
[ true ]
[ false ]
[ 0.85 ]
I get the L^1 "taxicab" norm: basically going from point to point as if the space is defined as a rectangular grid. I get the L^2 "Euclidean" norm: good ol' Pythagorean theorem generalized for n-dimensional space, useful for finding the distance between points. But what is the goal/meaning of our action when we take the L^3 norm, or the L^4 norm? And while we're at it, what about 0 < p < 1 (quasi-norm)? I'd like to understand; any help appreciated.
A locally integrable function or sequence may fail to converge and be in a Lebesgue space for two different reasons: the horizontal asymptote, where it fails to go to zero fast enough on sets of infinite measure; or the vertical asymptote, where it fails to go to zero fast enough near a singularity or place where the function is unbounded, on neighborhoods of arbitrarily small measure.

Raising the p value of the norm makes the horizontal-asymptote functions more convergent, and the vertical asymptotes less convergent. For example, 1/x on [1,∞) is in L^2 but not L^1 because of its tail. Lowering the p value makes the vertical asymptotes more convergent, and the horizontal less convergent. For example, 1/x on (0,1] is in L^{1/2} but not L^1.

Which means that if your measure space doesn't have any areas with neighborhoods of arbitrarily small measure, then it's a sequence space (ℓ^p) and you don't have any convergent vertical-asymptote functions, and there is an inclusion of spaces L^p in L^q for p < q, since raising the p value captures more functions/sequences with horizontal asymptotes and we don't have to worry about losing any vertical-asymptote functions, cause there aren't any. Similarly, if your space doesn't have any neighborhoods of arbitrarily large measure, then there is an inclusion L^q in L^p, because lowering the p value captures more vertical-asymptote functions, and we don't have to worry about losing any horizontal-asymptote functions, cause there aren't any.

If the space has neither small measure nor large measure neighborhoods, then it is a discrete space and the L^p space is finite dimensional. We have both of the inclusions L^p in L^q as well as L^q in L^p. Hence the spaces are equal. The norms are equivalent. All functions convergent with respect to any p-norm are also convergent with respect to any other (they are all finite).

If you want an intuitive picture of your norm, it may be useful to visualize norms in n-dimensional space via their unit circles, and if you do so then I guess "squircle" is the right answer for the L^4 norm, as u/DavidSJ points out. Or a higher dimensional analogue ("sphube"???). But that only works in finite dimensions, and in finite dimensions there is no need for a different norm. Only in infinite dimensional Lebesgue spaces do we gain anything by changing the p value of our norm. I don't think the unit sphere picture is helpful here. I don't have a visualization for unit spheres in infinite dimensions. Not even with p=2. I don't even have a visualization for the Pythagorean theorem.

So that leaves us with the answer: higher p is there to give us a Lebesgue space to study functions with less convergent tails. Lower p is there to give us a Lebesgue space to study functions with less convergent poles.
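The two examples can be checked symbolically; a sketch using sympy:

```python
from sympy import Symbol, integrate, oo, Rational

x = Symbol("x", positive=True)

def p_norm_integral(f, p, a, b):
    # Integral of |f|^p over (a, b); finite iff f is in L^p there.
    return integrate(abs(f) ** p, (x, a, b))

# 1/x on [1, oo): the tail is the problem, so raising p helps.
print(p_norm_integral(1 / x, 1, 1, oo))              # oo -> not in L^1
print(p_norm_integral(1 / x, 2, 1, oo))              # 1  -> in L^2

# 1/x on (0, 1]: the singularity is the problem, so lowering p helps.
print(p_norm_integral(1 / x, 1, 0, 1))               # oo -> not in L^1
print(p_norm_integral(1 / x, Rational(1, 2), 0, 1))  # 2  -> in L^{1/2}
```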
Perhaps you've seen this already, but in the plane you can visualize how the unit circle depends on the norm, interpolating between a diamond (l^1), a standard circle (l^2), and a square (l^∞). In the case 2 < p < ∞, you have a generalized squircle.
But you're just talking about topology here; a norm gives you geometry as well as topology, and the finite dimensional L^p spaces exhibit different geometry for different p values. For instance, even in finite dimensions, the unit sphere in L^p is strictly convex if and only if p is in the open interval (1, ∞). I know for a fact that in finite dimensions, if you want to do gradient descent with a norm regularization term in your energy (this shows up in data science, for instance), people really care about L^1 versus L^2 regularization, because L^1 regularization promotes sparsity of the mass distribution, versus L^2.
That's a special property of the plane. In 3D and above the L^1 and L^∞ norms aren't the same. In 3D in particular, the unit ball of the L^1 norm is an octahedron and the unit ball of the L^∞ norm is a cube.
Yeah, the L^∞ unit ball is always a hypercube and the L^1 unit ball is always an orthoplex. Edit: Also, the fact that the hypercube is dual to the orthoplex is directly related to the fact that L^1 and L^∞ are dual (in finite dimensions; in infinite dimensions the dual of L^1 is L^∞, but the dual of L^∞ is something much bigger than L^1).
[ "Inspired by Ireland and Rosen - I found an interesting inequality with Pi(x)" ]
[ "math" ]
[ "8pzs54" ]
[ 3 ]
[ "Image Post" ]
[ true ]
[ false ]
[ 0.72 ]
null
Minor formatting note: you can write log n as $\log n$ so it doesn't appear in italics. Note that your new term is in fact much larger than log(n)/log(4). Define s(x) as the sum of all the primes which are at most x. It follows from PNT with a little work that s(x) is asymptotic to x^2/(2 log x). One thing that puzzled me when I first saw what you have here is that it looked too strong while using essentially no properties of the primes, and I worried that something must have gone wrong, because you would be able to use the last term to bootstrap to a much higher order of growth. However, this turns out not to be the case: the entire argument you used would go through for many other sequences, such as, for example, if one replaced primes with powers of 2, in which case the sum at the end, once one divides by n, is about constant. So the upshot is that there's little actually involving the primes here, other than essentially the use of Chebyshev's Lemma/Bertrand's Postulate to conclude that the new term is positive.
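For anyone who wants to see the asymptotic for s(x) numerically, here's a quick sketch (the ratio drifts toward 1 only slowly, which is expected, since the relative error decays like 1/log x):

```python
# Sanity check of s(x) ~ x^2 / (2 log x), using sympy's prime generator.
from math import log
from sympy import primerange

for x in [10**3, 10**4, 10**5]:
    s = sum(primerange(2, x + 1))          # sum of primes up to x
    print(x, s / (x**2 / (2 * log(x))))    # ratio should drift toward 1
```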
This in turn suggests that maybe it should be ln instead of log? If the base of the logarithm isn't specified, it is safe to assume it is e, at least in mathematics.
It looks fine. It should be noted that π(x) > log(x)/log(4) is a "pretty shitty" inequality in the sense that π(x) grows much faster than log(x) does. If you're interested, this lower bound will be asymptotic to x/(2 log(x)) (because the sum over primes ≤ x is asymptotic to x^2/(2 log(x))), so it is a much better lower bound than log(x)/log(4).
I don't know if it is interesting; I haven't seen it before, but that's a very low bar. It might be interesting to run this in a different direction: do the same thing with any given sequence rather than just the primes, and ask what sort of growth information would actually allow one to use this sort of thing to bootstrap to a higher growth rate. My guess, though, is that there won't be many natural contexts where that occurs. Could you maybe suggest something similar I could research/work on? Honestly, it seems like you are already thinking in the right sort of ways about things at this point. And when I was your age, I definitely wasn't at the point where I was reading Ireland and Rosen. My mental model for algebraic number theory is something like ordering textbooks by difficulty/depth: the very basic (e.g. Ore's "Number Theory and Its History"), then the slightly higher level (e.g. Hardy and Wright), then books which require a small amount of abstract algebra or analysis (e.g. Apostol's "Introduction to Analytic Number Theory", and Ireland and Rosen), and then the more serious stuff (e.g. Lang's "Algebraic Number Theory"), and I'd generally expect I+R to be somewhere towards the end of undergrad or the very beginning of grad school. In terms of actual research, it is very hard to find level-appropriate problems, and most basic properties of primes are pretty well understood. Unfortunately, most of the problems of a level close to this that I could reasonably point you to are problems which I'm currently hoarding for my own students, or on which I'm in the process of writing up results, or problems where the underlying motivation is a bit too deep for the problem itself to have a satisfying motivation without a lot more work. But if you ask me again in a month the situation may be different.
Yeah, at the start I wanted to use x/(2log(x)) instead (and essentially use the same argument), but then I would reach non-elementary functions (I checked Wolfram for this), so I decided to just go with the more elementary log(x)/log(4).
[ "Im a 30 year old man who's about to be published and finally graduate.. I've spent the last couple of years struggling.. AMA" ]
[ "math" ]
[ "8pzqe5" ]
[ 31 ]
[ "" ]
[ true ]
[ false ]
[ 0.73 ]
i decided to go back to school a couple of years ago because I wasn't happy with how I was living.. now I'm on the verge of being published.. I sucked up the nerve to get out of an abusive relationship.. and I've never been more happy.. AmA
What topic was your thesis in?
Where did you do your degree?(if that's not a secret). Is it a three or four year degree? What did you do before college?
Not a very good AMA if you don't answer any questions!
What was your biggest struggle with publishing?
Do you know what AMA stands for?
[ "Where do the statistical tests, regression, and other tools originate from?" ]
[ "math" ]
[ "8pyp1d" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
[deleted]
Right, but where is the whole theory behind the t-distribution or F-distribution or any distribution? Why is a t-distribution used over the standard normal when you have a low sample size, less than 25 or so? How is this distribution derived in the first place? Same with the F-distribution used in ANOVA: why exactly is the F-value calculated the way it is, for example?
Well, I don't know what ANOVA is, but the t-distribution is by definition the distribution of a standard normal random variable divided by the square root of a chi-squared (itself divided by its degrees of freedom). You can arrive at its pdf with some calculations; it's pretty straightforward. And when you have a sample size of 25, the convergence might not be fast enough for you to approximate the distribution of your estimator with a normal. It's pretty much the same with the F-distribution: it's the distribution of the quotient of two chi-squared random variables (each divided by its degrees of freedom), and you can find its pdf in a pretty straightforward way as well.
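To make that construction concrete, here's a small simulation sketch (the degrees of freedom and sample count are my choices):

```python
# Sketch: build Student's t from Z / sqrt(chi^2_nu / nu) and compare
# against scipy's reference t-distribution.
import numpy as np
from scipy import stats

nu = 5                                      # degrees of freedom
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
v = rng.chisquare(nu, 100_000)
t_samples = z / np.sqrt(v / nu)

# Empirical vs theoretical 95th percentile:
print(np.quantile(t_samples, 0.95))         # close to 2.015
print(stats.t.ppf(0.95, df=nu))             # 2.015...
```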
Oh wow I didn't know it related to chi squared! Thanks!
Yes, and they come up a lot because usually estimators (or some multiples of them) for the variance of a (normal) population have chi-squared distributions. So, if you want to see how the variances of two different populations compare to each other, you end up with an F-distribution, and if you want to test the expected value of a normal population without knowing its variance, you end up with a t-distribution. This was a very handwavy explanation, and someone please chime in if there's anything wrong, but I hope this helps :)
[ "[Challenge] Ok, lets try this again. Simplify impedance formula for an RL Parallel Circuit" ]
[ "math" ]
[ "8pyju7" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.36 ]
null
Does Z^2 = XR mean anything?
Hey! Yes, it does: impedance squared = reactance times resistance, I believe. Not sure if it applies here or not. I'll work with it a bit.
If you draw out the triangle you can reduce your expression to that statement I believe. Rewrite the fraction you have by combining the terms as one fraction.
Hey! Thanks for the response. I'm sure you're correct, I'm just having a hard time understanding what you mean. Here is an example of the triangle as it is, with voltage, current, and power (which aren't of concern with this particular equation), and impedance, inductive reactance, and resistance. It illustrates how Z is just a fraction of the hypotenuse. Do you mind elaborating on your simplification a bit, please? Thanks again. https://imgur.com/a/tcSMK9b
Still there?
[ "For mathematicians, = does not mean equality" ]
[ "math" ]
[ "8pycvw" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.23 ]
null
This is satire, right? Please say yes.
This shit is some dumb shit.
It is expressive in ways that would be awkward to convey by a stricter type of notation, e.g., x + x^2 sin(x) = x + O(x^2), or x + x^2 + O(x^3) = x + O(x^2).
I hope so
"Aha! Where is your Gauss now?!" lol I'm stealing that.
[ "Logic/Set Theory/Category Theory" ]
[ "math" ]
[ "8py9lo" ]
[ 9 ]
[ "" ]
[ true ]
[ false ]
[ 0.84 ]
null
https://www.reddit.com/r/math/wiki/faq#wiki_what_are_some_good_books_on_topic_x.3F
Because it is at the very bottom of the suggestions in the wiki: I highly recommend Emily Riehl's Category Theory in Context as a starting point for category theory. Very modern language, tons of examples and motivation. Also, the pdf is free; you can just google it.
try the simple questions thread. I'll be happy to answer you there.
For basic set theory, definitely read Naive Set Theory by Halmos. It's in my opinion one of the best mathematics books written for any field: written in a very conversational tone, using mathematical notation instead of pages of tedious logical notation. Following Naive Set Theory, you might want a more in-depth book about set theory (because whilst Naive Set Theory is an excellent book, it stays fairly shallow). I like Elements of Set Theory by Enderton. For some more heavy-going books, there's Set Theory by Kunen and Set Theory by Jech. Both are much more in-depth looks into set theory and its subfields; however, both require more mathematical knowledge, and might only be worth a look after you've gone through the other set theory books and learned more about logic. For logic (and computability) I really like Computability and Logic by Boolos, Burgess, and Jeffrey. Category theory I'm not completely sure on, as I'm currently learning it myself, using Awodey's book; I can't say how that compares to other possible books.
Naive Set Theory is still useful for introducing the reasons for axiomatic set theory. However, this is beside the point: "naive" in Halmos's title isn't referring to the standard concept of naive set theory. In fact, Halmos deals with ZFC axiomatic set theory; "naive" just refers to the fact that the book doesn't go in depth into the logical details of axiomatic set theory (nor deal with more advanced topics). It is instead much more informal, and introduces axiomatic set theory to people with some mathematical knowledge/interest but no previous experience in set theory.
[ "Squaring non-perfect squares without a calculator" ]
[ "math" ]
[ "8pxvwf" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.78 ]
Hey guys, so I kinda just stumbled upon this while toying around with squares, but please tell me what you guys think of this and if there is anything to add to it. (I am using the square root of 54 as an example.) 1. The square root of 54 is in between the square root of 49 and the square root of 64 (7x7 and 8x8). 2. Find the difference between the larger perfect square and the non-perfect square: 64-54=10. 3. Find the difference between the larger perfect square and the smaller one: 64-49=15. 4. Take the result of step 3 and subtract the result of step 2 from it: 15-10=5. 5. Take the square root of the smaller perfect square (49), which equals 7. 6. Add the result of step 4 divided by the result of step 3: 7 + 5/15 ≈ 7.33. On a calculator, the square root of 54 = 7.34...
Nice! This is a pretty good piecewise-linear approximation of the square root (linear interpolation between the integer values). Here's a graph: https://www.desmos.com/calculator/mv9nimutqk Gray is the square root, green is OP's function.
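For anyone who wants to play with it, here's the same interpolation packaged as a function (a sketch; the name is mine):

```python
# Linear interpolation between neighbouring perfect squares, as in OP's steps.
import math

def approx_sqrt(n):
    """Piecewise-linear approximation of sqrt(n) for a positive integer n."""
    lo = math.isqrt(n)                 # floor of the true square root
    if lo * lo == n:
        return float(lo)               # n is already a perfect square
    hi = lo + 1
    # fraction of the way from lo^2 to hi^2, added onto lo
    return lo + (n - lo * lo) / (hi * hi - lo * lo)

print(approx_sqrt(54))                 # 7.333..., vs sqrt(54) = 7.348...
```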
A slight improvement if you can square halves, too: (a+1/2)² = a(a+1)+1/4. So, 7.5² = (7)(8)+1/4=56.25. That would make sqrt(54) about 7+0.5(5/7.25) or 7 10/29. I usually do (square root + discrepancy/(2sr)): 54 is 2.25 below 7.5² so sqrt(54) is about 2.25/15 or 3/20 below 7.5. That’s 7.35, which is good to 3sf.
No challenge really; I never learned how to take square roots to unlimited decimals, so when I discovered this I thought I should share it.
What method do you use for that? Babylonian? How do you verify how many digits are correct?
The Nuns in 6th grade did not speak approvingly of Babylon. So I don't know. I just do it.
[ "Does the four color theorem works on a non euclidian surface?" ]
[ "math" ]
[ "8pxbbd" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
I'm thinking of another math tattoo. I'd like to have a mathematical map around my arm where the four color theorem doesn't work. Because a lot of people know the four color theorem, it could lead to interesting discussions. I think it does work, though; I seem to remember that it even works on a sphere.
Unfortunately, assuming your arms are cylindrical, there won't be any counterexamples since the case of a cylindrical surface can be reduced to that of a planar surface. However, if your arm is a Klein bottle, you're in luck: there are maps on Klein bottles which require as many as six distinct colors.
Unfortunately, my arms are stuck in 3 dimensions...
If you have a graph on a sphere, you can remove a point from the sphere, and you'll get a planar graph, thus the four color theorem applies here. You can embed K_7 (the complete graph on 7 vertices) on a torus, and this graph is clearly not 4-colorable. However all graphs on a torus are 7-colorable.
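Both facts are instances of the Heawood bound; here's a tiny sketch of it in terms of the Euler characteristic χ of the surface (the formula is standard but I'm supplying it from outside this thread, and it is valid for χ ≤ 0; the Klein bottle, with χ = 0, is the one exception, needing only 6 colors instead of the 7 the formula predicts):

```python
# Sketch: the Heawood bound on the chromatic number of a surface with
# Euler characteristic chi <= 0.
from math import floor, sqrt

def heawood(chi):
    return floor((7 + sqrt(49 - 24 * chi)) / 2)

print(heawood(0))    # 7 -> the torus
print(heawood(-2))   # 8 -> the genus-2 surface
```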
You just have to get a tattoo that changes over time. Can't be that hard, right?
A torus can require up to 7 colors, but it might be hard to find someone to tattoo your digestive tract. (More seriously, you could do a coloring for when you hold your arms together in front of you.)
[ "Hand-drawn fractal" ]
[ "math" ]
[ "8pwxir" ]
[ 204 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.95 ]
null
I'm scared.
Well, it fits on a sheet so it is no more than 2.
If you're wondering, this is no political symbol.
Try to guess the fractal dimension.
But it is still related to how many physical dimensions it occupies, and it can't be more than that number, so this will have a dimension somewhere between 1 and 2.
[ "What do Number theorists actually do for the NSA" ]
[ "math" ]
[ "8pyd9y" ]
[ 434 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
I know that the NSA employs a lot of mathematicians especially number theorists, and I know that number theory is very useful for encryption. But I guess I don’t know how you go from number theory, a pretty pure math topic to something so applicable like encryption.
Hahaha nice try Russia
I don’t think I could have imagined a better reply
this blog post has some info: https://www.math.columbia.edu/~woit/wordpress/?p=6243 The last few days have seen some new revelations about the NSA’s role in compromising NIST standard elliptic curve cryptography algorithms. Evidently this is an old story, going back to 2007, for details see Did NSA Put a Secret Backdoor in New Encryption Standard? from that period. One of the pieces of news from Snowden is that the answer to that question is yes (see here): Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.” The NIST has now, six years later, put out a Bulletin telling people not to use the compromised standard (known as Dual_EC_DRBG), and reopening for public comment draft publications that had already been reviewed last year. Speculation is that there are other ways in which NIST standard elliptic curve cryptography has been compromised by the NSA (see here for some details of the potential problems). The NSA for years has been pushing this kind of cryptography (see here), and it seems unlikely that either they or the NIST will make public the details of which elliptic curve algorithms have been compromised and how (presumably the NIST people don’t know the details but do know who at the NSA does). How the security community and US technology companies deal with this mess will be interesting to follow, good sources of information are blogs by Bruce Schneier and Matthew Green (the latter recently experienced a short-lived fit of idiocy by Johns Hopkins administrators). The mathematics being used here involves some very non-trivial number theory, and it’s an interesting question to ask how much more the NSA knows about this than the rest of the math community. Scott Aaronson has an excellent posting here about the theoretical computation complexity aspects, which he initially ended with advice from Bruce Schneier: “Trust the math.” He later updated the posting saying that after hearing from experts he had changed his mind a bit, and now realized there were more subtle ways in which the NSA could have made number-theoretic advances that could give them unexpected capabilities (beyond the back-doors inserted via the NIST). Evidently the NSA spends about $440 million/year on cryptography research, about twice the total amount spent by the NSF on all forms of mathematics research. How much they’re getting for their money, and how deeply involved the mathematics research community is are interesting questions. Charles Seife, who worked for the NSA when he was a math major at Princeton, has a recent piece in Slate that asks: Mathematicians, why are you not speaking out?. It asks questions that deserve a lot more attention from the math community than they have gotten so far. Knowledgeable comments about this are welcome, others and political rants are encouraged to find somewhere else. There’s a good piece on this at Slashdot…
Basically, encryption is about posing instances of a hard math problem that can't be solved easily. Number theory is big in that, because one of the canonical problems thought to be hard is integer factorization. What's 12 times 13 times 14? What three numbers multiply to 1001? Which is harder? I can elaborate more if you're interested, but a cryptographer working for the NSA might try to solve various hard problems more efficiently than before, or try to turn hard problems into useful ciphers, or try to abuse the ways in which encryption has to actually be done to circumvent the hard problem itself.
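A toy illustration of that asymmetry, with sympy doing the factoring (the prime sizes here are arbitrary and far below cryptographic scale, where the factoring side becomes utterly infeasible):

```python
# Multiplying two primes is instant; recovering them is noticeably slower,
# and the gap explodes as the primes grow.
import time
from sympy import factorint, randprime

p = randprime(10**11, 10**12)
q = randprime(10**11, 10**12)
n = p * q                                # multiplication: instant

start = time.perf_counter()
print(factorint(n))                      # factoring: measurably slower
print(time.perf_counter() - start, "seconds")
```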
China does not need help.
[ "World's fastest new supercomputer- Summit, can do 200 quadrillion math calculations per second" ]
[ "math" ]
[ "8pwc2u" ]
[ 38 ]
[ "" ]
[ true ]
[ false ]
[ 0.78 ]
null
Wow, and my lecturers can't even do one an hour.
And yet Texas Instruments is still milking the same outdated 1980s hardware
Simulations?
I'm starting to feel out-dated
All of the above. Weather patterns, cryptography, astrophysics, protein folding, fluid dynamics - There are whole categories of problems that can't be solved exactly, only approximated with algorithms. The more computing power you throw at the problem, the faster you get results, the bigger the simulations you can run, the more accuracy you can achieve, etc.
[ "STEM Class Project: Build an Electric Guitar" ]
[ "math" ]
[ "8pwhz8" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.44 ]
I am in charge of a STEM (science technology engineering and mathematics) program, and for our year long project I want all of us to build an electric guitar from (relatively) scratch. Can anyone enlighten me on some good lecture topics for this? Plans to make one that can be modified for individuals ideas? I plan on giving 30 min presentations on the workings behind an electric guitar once a week, including the use of CAD programs. I also plan on a couple hours a week workshop for the actual building of the guitar after we’ve spent a while designing and calculating. Any resources you can think of to make this ambitious project a reality would be awesome!
Standing waves, harmonics, Fourier series are all very applicable to the physics of music. For electronics, RLC circuits and filters (ties back to Fourier). For design, Bézier curves could be interesting.
An excellent tutorial https://www.youtube.com/watch?v=TwIvUbOhcKE
Perfect, thank you! My kids are high school age, any idea on how to introduce Fourier series without getting glazed looks and losing them?
Maybe start by showing what various waveforms (square, triangle, sawtooth, etc.) sound like. Then show how, by adding more and more sine waves together, you can approximate the sound of the more complex waveforms. Then get visual and introduce the transform as the tool that tells you what frequencies you need to make these approximations. Wikipedia has a great graphic showing the decomposition of a wave that might be useful here. Last, if you have already introduced voltage and AC, you can tie into the electronics portion of the course by asking if they think this might work on other types of waves. All that said, you can probably teach this without Fourier analysis.
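If you want a ready-made visual, here's a short sketch of that square-wave build-up (all choices of amplitude and term counts are mine):

```python
# Partial sums of the Fourier series of a square wave: (4/pi) * sum over
# odd k of sin(k*t)/k. More terms -> sharper corners.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 1000)
for n_terms in [1, 3, 10]:
    wave = sum(np.sin(k * t) / k for k in range(1, 2 * n_terms + 1, 2))
    plt.plot(t, 4 / np.pi * wave, label=f"{n_terms} term(s)")
plt.legend()
plt.show()
```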
I'd potentially recommend starting off in terms they might understand from other contexts, for instance the bass, treble, and mids of a song. You could show an equalizer output and explain that it's measuring the energy of a frequency group (bass, for instance) and that the summation of different frequencies comprises the sounds heard in the song at a given moment. That's a basic visual explanation. Also consider showing them a square wave constructed using more and more terms of a Fourier series; there are tons of animations for this. That representation might be useful in showing how high-frequency signals generate the visually sharp parts (the rising and falling edges) while low-frequency information represents the horizontals (think the flat top), with the higher frequencies filling in the details. Additionally, when I've explained Fourier synthesis, I found it useful to connect the construction of a waveform from other waveforms to the idea of elementary shapes being used to construct more complex art.
[ "What math course/subject changed how you think the most?" ]
[ "math" ]
[ "8pvwp0" ]
[ 41 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
null
Category theory, hands down. It's just a remarkably convenient way to organize things and streamline reasoning. As a language, it has a really good "compression ratio". If you do any sort of algebra or geometry, category theory might be a good thing to invest in.
Real Analysis. It is of course standard but it makes you challenge your intuitions.
My first course in abstract algebra was a big one. Even basic ideas that come up in a short unit on group theory, like - homomorphisms (structure-preserving maps) - isomorphisms (two collections are really the same collection w.r.t some structure) - quotients (reducing a collection of objects by declaring a bunch of them to be the same) turned out to be really universal ideas that were very powerful once I digested them.
Set Theoretic Forcing. Just the concept itself radically changed what I perceive to be "mathematical truth" in the first place. It also seemed so incredible that you could achieve such deep and powerful results through what on the surface feels like purely combinatorial trickery.
Algebraic topology showed me how interesting algebra is, and led me toward category theory which I am now strongly interested in.
[ "Hardest mathematics book?" ]
[ "math" ]
[ "8pvn63" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.68 ]
What math book have you read that challenged you the most?
Algebraic Geometry, Hartshorne
No one can claim to have solved all the exercises, since some of them are problems which still (to my knowledge) remain open.
Real Analysis by Folland
In what way? Folland's one of the best books I've learned from. It's admittedly quite dense, but it's self-contained and the language is very clear. I found most exercises to be very doable. (Although I never did the last couple of chapters.) Rudin's book on real and complex analysis on the other hand I find to be nearly impenetrable.
I think it's more like for a class for people who know their undergraduate Analysis; I know the first PDE class I took as a grad student, which did not have any earlier PDE class (like an undergrad-level one) as a pre-requisite, used Evans.
[ "Plot of the birthday paradox" ]
[ "math" ]
[ "8pvl4b" ]
[ 118 ]
[ "Image Post" ]
[ true ]
[ false ]
[ 0.77 ]
null
Do you have a source for that formula? It doesn't look right to me. Isn't the correct expression 1 - (364/365)(363/365)...((366-x)/365), so that it = 1 for all x > 365? Your formula approaches 1 only asymptotically, and assumes no correlations between pairs of people.
You've got the formula wrong (it's approximately correct for small x, though). For example, you claim that p(4) = 1 - (364/365)^6, but in reality p(4) = 1 - (364/365)*(363/365)*(362/365).
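For anyone curious, here's a quick sketch comparing the exact probability with the pairwise-independence approximation (function names mine):

```python
# Exact birthday probability vs treating all C(n,2) pairs as independent.
from math import comb

def p_exact(n):
    prob_all_distinct = 1.0
    for k in range(n):
        prob_all_distinct *= (365 - k) / 365
    return 1 - prob_all_distinct

def p_pairwise(n):
    return 1 - (364 / 365) ** comb(n, 2)

for n in [4, 23, 50]:
    print(n, p_exact(n), p_pairwise(n))   # close for small n, drifts apart later
```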
So basically... besides the formula, the values, and the curve... it's a nice graph.
It's a really good approximation: Desmos graph
If you want to take leap years into account, you should also consider the fact that being born on each day is no longer equally likely. Even if you assume that a birth date falls on each day with equal probability (which is very far from the truth in the real world), there are only a little less than a quarter as many February 29ths as there are February 28ths.
[ "Base rate fallacy" ]
[ "math" ]
[ "8putyf" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
Howdy. Long time listener, first time caller. An old professor of mine posted the following text: "Blacks make up 13% of the population in the USA...but 42% of wrongful convictions." EDIT: ...and went on to imply that there is likely a bias against blacks in wrongful convictions. My response is that this is the base rate fallacy: without knowing the percentage of blacks who were convicted in the first place, we don't know what the 42% number means in terms of wrongful conviction. I'm engaged in a debate with someone who is convinced that this statement alone is enough to draw a conclusion (and who claims to teach stats). Am I missing something? P.S. This is not a political post. Not that I'm worried...
There is no base rate fallacy here; he just cited two statistics. If you want to infer that black convicts are more likely to be wrongfully convicted than white convicts, then yes, that would be the base rate fallacy. However, if the conclusion is wrong, it's because black people are simply more likely to be convicted of a crime at all, and it's not clear that this really undermines the claim.
Equal protection under the law means in part (to me, at least) that all law-abiding citizens are equally likely to be wrongly convicted of a crime. So the fact that blacks are several times more likely to be wrongly convicted of a crime than non-blacks (irrespective of the base conviction rate) implies to me a violation of the 14th amendment. Now you're committing the base-rate fallacy again: you've jumped from "people" to "law-abiding citizens", which is just the complement of the error OP is criticizing. It is a brute fact that, in the USA, 37% of prison inmates are black. If black people are a little more than a third of all convicts, it is not very surprising that they are also a little more than a third of wrongful convicts. That is sufficiently explained by a flat error rate that is roughly independent of race.
Right, so he claimed there's a bias against blacks in convictions, not in wrongful convictions. This is incontrovertibly true: either they are convicted more than average, or disproportionately many black convictions are wrongful (it's the former). But that's a bias in the statistical sense, not necessarily in the social sense, and not necessarily in the courtroom specifically. You'd need to do a lot more legwork to show that the court system is being unfair to black people, rather than that something else upstream leads to a disproportionate number of black convicts (e.g. poverty, social factors, disproportionate police attention, etc.).
That's not what you wrote above.
[ "Set theory as an approximation of stronger logic?" ]
[ "math" ]
[ "8pulj5" ]
[ 16 ]
[ "" ]
[ true ]
[ false ]
[ 0.85 ]
Can you look at set theory as an attempt to approximate stronger logics in first-order logic? The fact that functions are sets and you can quantify over them feels like emulating higher-order logic. The axiom of infinity gives you a way to do infinite conjunctions and disjunctions, as in some kind of infinitary logic, and the axiom of choice basically kinda allows you to have infinitely many quantifiers. And when we do mathematics informally, we often use those properties without really caring about the underlying set theory, simply using them as if we really were in a stronger logic system. So what I want to ask is: Am I speaking complete nonsense? Is there any formal way of treating this correspondence? Or at least some philosophical insights: can this be taken as a motivation for constructing a set theory, or does it follow from other motivations behind set theory? Are there any papers on the subject?
There is some truth to this, definitely. For example, set theory (e.g. ZFC) can interpret higher-order theories of simpler structures, such as second-order PA (the second-order structure of the natural numbers). It's very natural in some cases to want higher-order logic, and set theory lets us do this without actually needing higher-order logic, by interpreting it in a lower-order logic for a richer structure.
Second-order PA with second-order semantics doesn't "prove" things, because there's no complete system of deduction for working with it. It logically entails all true things about arithmetic, but there's no proof system that captures these entailments.
It's interesting that you mention the axiom of choice as sort of giving you infinitely many quantifiers, because it's responsible for the fact that certain expressions involving infinitely many quantifiers are poorly behaved logically. Specifically, sentences involving quantifiers can be thought of in terms of games. The sentence 'for all x there exists a y such that x < y' can be thought of as a game where one player picks an x and the other player tries to pick a y greater than that. The fact that the 'exists' player has a winning strategy corresponds to the fact that the statement is true. Conversely, a (finitary) sentence is false if the 'for all' player has a winning strategy. Using the axiom of choice it's possible to construct games of infinite length where neither player has a winning strategy, so this kind of infinitary logic fails to obey certain natural generalizations of tautologies in finitary logic, in particular the one discussed here.
The issue is that semantic entailment (being true in all models of a theory T) isn't equivalent, for second-order logic, to syntactic entailment (having a proof from T), assuming you have an effective notion of proof. The set of arithmetic theorems of second-order PA is in fact computably enumerable, so it cannot be TA (true arithmetic), even though its unique model satisfies TA.
It should be clarified that AC lets you construct a nondetermined game. If you look at games played on arbitrary sets, then there's always a nondetermined game, even if AC fails. This is because AC is equivalent over ZF to clopen determinacy for games played on arbitrary sets. The forward direction is given by Zermelo's theorem, and the backward direction comes by considering the two-turn game where player I plays a nonempty subset of a set X and player II plays an element of X, with II winning iff her play is in the set I played. Player I clearly can't have a winning strategy, so if the game is determined then II has a winning strategy, which gives a choice function on the nonempty subsets of X.
[ "So sweet and generous, now we know his PIN code! Wait what?" ]
[ "math" ]
[ "8puar7" ]
[ 0 ]
[ "Image Post" ]
[ true ]
[ false ]
[ 0.47 ]
null
Jokes on her. I’d just press credit.
In Canada, our credit cards have PINs as well...
Mathematica/Wolfram is the way to go.
Seems like the only fault in your country. Must be nice 😭
Don't mind me while I just google... Integral calculator...
[ "How many numbers do you need for infinite variation without pattern, and when do odds become impossible?" ]
[ "math" ]
[ "8puhp3" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.38 ]
[deleted]
Isn’t it Googol?
You need a coconut gun for that.
2. Use primes. Take the last digit of each prime number (in binary) and add it to the "non-pattern". So you just need 2. And ta-da no pattern
Actually, I think there's a pattern to the last digit of prime numbers in binary: every prime except 2 is odd, so its last binary digit is 1.....
Oh yeah ha ha. Maybe in base 10 or something, although you'd stop getting even numbers or 5.
[ "Good day/evening. Are any of you aware of a book that contains a lot of geometric construction problems?" ]
[ "math" ]
[ "8pueu8" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
[deleted]
If you are starting out try Kiselev's Geometry, or maybe Hadamard's Lessons in Geometry. Next maybe try Coxeter & Greitzer, Geometry Revisited. Polya's book has some nice construction problems (among other types). After you are reasonably comfortable, try Yaglom's books on Geometric Transformations (in English, 4 books). If you want a challenge, Evan Chen's Euclidean Geometry in Mathematical Olympiads has a bunch of tough problems. The book /u/muppettree mentioned, Akopyan's Geometry in Figures (free pdf of the first edition), is a whole bunch of pictures demonstrating various geometrical facts. If you can manage to go through some significant portion of the book and (a) figure out what each picture is saying, and (b) prove that it is true, you will learn a whole lot. Not sure how easy that actually is in practice, though; a book including text as well might be more helpful.
I don't actually have any book in mind except that one, but there is an Android game I know, "Euclidea", and it's very tough.
Maybe Akopyan, Geometry in Figures. Contains all kinds of Euclidean geometry problems.
Euclid's Elements
Thank you very much
[ "What are some good books articles for getting you inspired?" ]
[ "math" ]
[ "8ptpl8" ]
[ 10 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
Might sound like an odd/stupid/immature question, but this is something I have had on my mind for a while. Obviously, discipline beats motivation every time, hands down: there is no substitute, if you need to get work done, for just getting yourself into the habit of sitting down at a certain time and doing it. However, it's always much more enjoyable to work/study/research/revise if you're feeling motivated. I have found in the past that there are some books and articles that just have this incredible ability to remind me how amazing and "beautiful" maths is, and so make me really feel like that's all I want to do for the next few hours. Among these books are Love and Math, by Edward Frenkel, and tbh anything by Greg Egan, as well as a few others such as the first hundred translated pages of Récoltes et Semailles. Weirdly, some non-mathematical books by Iain M. Banks have this effect, too. One specific article that I like (partly because every time I go back to it I ever so slightly understand the vocabulary more) is in one of the AMS Notices, I think. What articles/books/other media (e.g. the FLT documentary) have this sort of effect on you? What is your opinion on this sort of thing in general? These books can be at any level: I am about to do a masters course next year, but stuff aimed way above or below that is absolutely fine; this is for anyone.
A Mathematician’s Apology by Hardy is quite possibly my favorite text I’ve ever read and it’s short too
I feel like a lot of his life is just unintentionally funny. Unsure how true it is, but I heard he used to claim to be not religious, yet continually talked about how god hated him
Reading about the history of the greats inspires me for some reason. I guess that I look up to them, and learning to think like them naturally makes me ambitious and curious (since they certainly were).
That reminds me of Erdős calling God the "supreme fascist". I guess another book maybe to add to the list (though others like it more than me) is The Man Who Loved Only Numbers, by Paul Hoffman, which is a biography of Erdős.
Take it with a pinch of salt. He was a great mathematician, and maybe his anecdotal evidence agrees with his views, but there are a few things that by general consensus are wrong. For example, "mathematics is a young man's game". We have had a lot of great young mathematicians, and all the big ones did great stuff young, like Galois, Euler, Gauss, etc., but in this day and age, even though you may peak mathematically in your 20s or 30s, you can still be at the top level while old.
[ "Could a product integral be classed as a binary operation?" ]
[ "math" ]
[ "8ptmut" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.99 ]
Say we have an infinite set of functions f_n(x). Could we say \int_{0}^{t}{f_i(x)f_j(x)}dx is a binary operation? I have just started studying groups and I'm still in the learning stages. I understand a binary operation is an operation between 2 elements a, b to get a ∘ b. Just trying to wrap my head around it, since I have studied inner product spaces and I remember integration being an inner product. Thanks!
Could we say \int_{0}^{x}{f_i(x)f_j(x)}dx is a binary operation? Your integral doesn't make sense as written (x cannot be both the variable of integration and also an upper limit of the integral). Assuming you mean e.g. the usual L^2 inner product on C[a,b], this is not a binary operation. The reason is that a binary operation takes as input two elements of a set and outputs an element of the same set (it operates on two elements to produce a third element). An inner product takes as input two vectors and outputs a real number.
That is not to say that interesting binary operations on sets of functions in this context don't exist: for example, convolution is an integral transform that is a binary operation on certain sets of functions.
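For a concrete, discrete version of that: convolution really is a binary operation, taking two sequences to a sequence (a small numpy sketch; the inputs are arbitrary):

```python
# Convolution as a binary operation on finite sequences.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 0.5])

c = np.convolve(a, b)        # takes two sequences, returns a sequence
print(c)                     # [0.5 1.5 2.5 1.5]

# It is commutative (and associative), like the continuous version:
print(np.allclose(np.convolve(a, b), np.convolve(b, a)))
```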
The output of the operation is a function of t, right? Then yes, as long as you restrict to e.g. continuous functions so that the integral always exists. Exercise: does this form a group?
Sure; like any other abuse of notation, you are free to do it if you understand what you are doing.
Fixed the integral haha Thanks for the speedy reply! I was thinking if we had a infinite set of say 1/ax where a and b are all real numbers so when we integrate we get the same form that would be in the set. Although the upper limit has now destroyed that so i see what you are saying :)
[ "In Dirichlet Functions, how can we say that after the \"number just after\" an irrational number will be a rational number?" ]
[ "math" ]
[ "8ptkj4" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.38 ]
[deleted]
There is no such thing as a "number just after an irrational number". Rather, the indicator function on Q is discontinuous because any neighbourhood of a rational number contains an irrational number and vice versa.
Between any two real numbers, there is a rational. Let a,b be positive reals with a < b. By the Archimedean property there exists a natural N > 1/(b-a) so that 1/N < b - a. Let p be the largest natural so that (p-1)/N < a. (We may do so since the set of naturals m so that (m-1)/N < a is bounded above, and the naturals are well ordered, and m=1 shows the set is nonempty). Then p/N >= a, but also p/N = (p-1)/N + 1/N < a + b - a = b. In other words, p/N is a rational in [a, b). Can you think of how the other proof might go?
Yes. This is done in any real analysis book.
Because both the rational numbers and the irrational numbers are dense in R.
In this case, the tone sounded very entitled and demanding, especially in an age when you can easily google something like "density in real analysis" or even "real analysis book recommendations" yourself. You could probably even search just this subreddit to find multiple good threads explaining this and/or providing references.
[ "Infinite sines" ]
[ "math" ]
[ "8pthrm" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 0.77 ]
[deleted]
You have to show that the sequence converges. You can say that if it converges, then it converges to 0, since 0 is the only solution to y = sin(y). As an example of why convergence needs proof, let f(x) = 2x and see what happens when you consider f(x), f(f(x)), f(f(f(x))), ...: the only fixed point is 0, yet the iterates diverge for any x ≠ 0.
There are two questions to ask here. First, does the function have a fixpoint? That is, does there exist a y such that y = f(y)? If not, then your sequence x, f(x), f(f(x)), ... cannot converge to anything, since if it did converge, it would have to converge to a fixpoint of the function. Second, your fixpoint MIGHT be a limit, but it isn't, necessarily. So you still need to prove convergence. To see how this can go wrong, consider the function f(x) = 2/x. The square root of 2 is a fixpoint of this function, but the only way the iterated sequence converges is if you start at the square root of two already! (Note that f(f(x)) = x, so you cannot get convergence otherwise.)
ln and log will not be defined yes. It keeps moving the graph to the right.
Before you can say that y = sin sin ... sin x, you must prove that y exists. Let y_(n+1) = sin(y_n), with y_0 = x. If x is inside (0, pi/2), then 0 < sin(x) < x, so y_n stays inside (0, pi/2) for all n and is monotonically decreasing; being monotone and bounded, y_n has a limit. After this your argument becomes correct. (For x inside (-pi/2, 0) the proof is similar; for x = 0 it is trivial; and for all other x, note that -1 <= sin(x) <= 1, so after one step you land in [-1, 1] and can reduce to the previous cases.)
An easy way to prove this is to look at the image of sin sin ... x for different numbers of sines. For one sine, we get [-1, 1]. For two, note that sin is odd and monotone on [-1, 1], so we get [- sin 1, sin 1]. Keep going; what does sin sin ... sin 1 approach?
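And a quick numerical sketch of the iteration, for anyone who wants to see how slowly it crawls to 0 (the iteration count is arbitrary):

```python
# Iterating sin from a few starting points; everything drifts toward 0.
import math

for x in [1.0, -1.2, 3.0]:
    y = x
    for _ in range(10_000):
        y = math.sin(y)
    print(x, "->", y)    # all close to 0, but convergence is very slow
```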
[ "Square problem" ]
[ "math" ]
[ "8pt9zq" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
I recently just got into math and I've been stuck on a problem for a while now. It's a problem I made up myself, and I don't even know if there's a solution at all. The problem takes place in a Cartesian coordinate system; it's about finding the coordinates on a shape at a given angle. Say you have a circle centred at the origin with a radius of one: given a random angle θ, the intersection between the ray at that angle and the circle is (cos(θ), sin(θ)). Easy stuff, right? That's not the main problem: the problem is doing the same with a square. If I wanted to find the coordinate on just one line, that would be easy peasy, I would just use a system of equations and boom. Can anyone think of a formula for the square? Thanks in advance. - Van Beveren
Suppose the angle is less than π/4, so the point will land on the right side of the square. The point lies both on the parametrized line (t cos(θ), t sin(θ)) and also on { x = 1 }. Therefore, t cos(θ) = 1, so t = 1/cos(θ). Therefore, the function goes f(θ) = (1, tan(θ)) for θ in [0, π/4]. All the other sides are the same, but you'll have to define the expression piecewise because it's not differentiable at the corners.
Wouldn't that approach (1,1)?
Here's my solution; I'm a novice, so sorry if it's wrong. Let a be the angle mod 360. If a in (315,360]∪[0,45]: F = (1, 1/tan(90-a)). If a in (45,135]: F = (1/tan(a), 1). If a in (135,225]: F = (-1, -1/tan(90-a)). If a in (225,315]: F = (-1/tan(a), -1).
Here is my solution in polar coordinates (the Cartesian case is already answered). First we draw the line from (1,0) to (1,1), which is r(t)=1/cos(t) where t goes from 0 to pi/4. Then the line from (1,1) to (0,1) is r(t)=1/sin(t) where t goes from pi/4 to pi/2. To complete our square we need to repeat this procedure, i.e., make the function periodic. The quick and dirty trick to make any function periodic is to compose its input with the modulo operator, in our case replacing every t with t mod (pi/2). Here is the final result. The nice thing about polar coordinates is that we can get this interesting "unwound" square by plotting the same function in Cartesian coordinates.
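As a footnote, the whole square also has a single closed form in polar coordinates, r(θ) = 1/max(|cos θ|, |sin θ|), since the square is just the unit sphere of the sup norm; here's a short plotting sketch of that (not the piecewise formula above, though the two agree):

```python
# The unit square traced in polar form: r = 1 / max(|cos|, |sin|).
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 1000)
r = 1 / np.maximum(np.abs(np.cos(theta)), np.abs(np.sin(theta)))
plt.plot(r * np.cos(theta), r * np.sin(theta))  # the square |x|, |y| <= 1
plt.gca().set_aspect("equal")
plt.show()
```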
[ "What did you study that later on became useless?" ]
[ "math" ]
[ "8psqxj" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.32 ]
[deleted]
Wait, are you saying that not all mathematicians are working tirelessly on the P vs. NP problem? Next you'll be claiming that some aren't working on Collatz!
How do you approach P=NP ? You don't.
There are very few basic concepts learned in undergraduate and graduate school that are NOT used in everyday research. Research is not just about using fancy theorems to prove new fancy theorems. It's also about using tools, plain or fancy, to build new ones that provide the insight and power to develop new mathematics or understand existing math more deeply.
I use Calculus 3, Linear Algebra, and Differential Equations on a daily basis for my job, and it's not even a role as a "statistician" or "mathematician". But if you don't want a 6-figure salary in your first year or two out of college, then ignore me and don't bother. It's not like Finance, Engineering, or Data Science are worth your time anyways.
[ "I made a Boolean game. Hopefully the CS community might appreciate it." ]
[ "math" ]
[ "8pstc7" ]
[ 497 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
null
This is a cool game. Consider requiring the user to "submit" or finalize their answer. That way they have to think a bit about whether their answer is complete, rather than being immediately told when it is. Maybe have a mode one can do without the timer. Some people might want to continue further, but get frustrated that they aren't getting it fast enough. I thought the SMW sound effects were fun, but it might be a good idea to use open source sound effects.
Loved it. Is it possible to turn off timer? This would make it easier for newbies or older people to enjoy! Very nice!
My laptop has a touch screen and the difference between the trackpad and the touchscreen makes a big difference. I can see this being a great mobile game.
Kind of an odd choice of title in that case :/
Even for us young guys, it gets to the point that I know what to click on (e.g. orange 2) but can't find it and click on it with a trackpad faster than the timer. Edit: mouse makes huge difference.
[ "Can anyone do this?" ]
[ "math" ]
[ "8ps3q7" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.05 ]
null
From the sidebar: Homework problems, practice problems, and similar questions should be directed to /r/learnmath , /r/homeworkhelp or /r/cheatatmathhomework . Do not ask or answer this type of question in /r/math .
lol is this the irish junior cert??
Yes
Thanks this means I got part 1 right and part 2 wrong
[ "How did Tadashi Tokieda end up where he is?" ]
[ "math" ]
[ "8prgz4" ]
[ 103 ]
[ "" ]
[ true ]
[ false ]
[ 0.9 ]
I understand the rough overview of his life: he started as a painter, then studied philology (and, in turn, many different languages). Eventually he learned English, got his PhD at Princeton, and taught at Cambridge. Now he teaches at Stanford. But my questions are: does someone know how he chose to pursue math and how he was able to learn it so quickly? And why the myriad interesting careers before?
It just amazes me how many talents this man has. He is also a gifted teacher; his topology and geometry lectures on YouTube are so enjoyable. Edit: lectures on topology and geometry given at AIMS, South Africa
Why do you think he learned math "so quickly"? He got an undergraduate degree in math, and it does not seem remarkable to get an undergraduate degree and then a graduate degree in a subject. I think the most remarkable thing within math is being able to do good research.
Here's one from Numberphile. Not exactly an academic-level lecture, but it's interesting nonetheless.
How about a link to one?
[ "How should I review over the summer?" ]
[ "math" ]
[ "8prbi7" ]
[ 9 ]
[ "" ]
[ true ]
[ false ]
[ 0.8 ]
I am an undergraduate math student in my senior year. Last semester I had the pleasure of taking a professor who really cared about teaching us well. He would never let us "BS" anything in class, and we had to be very thorough with our explanations and proofs. Taking his class made me realize that many of the other professors were quite lenient and had low standards, so I feel like I got through most of my classes through rote learning, and I have forgotten a lot of material. One thing I tried doing was working through Khan Academy all the way from the bottom to pinpoint which areas I was weak in. I'm not sure if this is a good use of my time, since so many elementary skills were constantly being applied and reviewed throughout so many of my classes. I'm looking for resources or anything that can help me review for next semester.
I wrote up a list of 35 things students can do over the summer to level up their math abilities. (My blog is ad-free so I don't make money off of clicks.) For you I suggest checking out and attempting an IBL course, like this one in analysis . It sounds like the kind of thing you'd like: the text gives you only the definitions and statements of results and your job is to prove and work through all of them step by step.
Khan academy would probably be a waste of time if you can prove where each result comes from with just a slight jog of memory.
Try solving all the problems in Spivak's Calculus on Manifolds.
Thank you for this great list! I will work through them after my finals + a break.
Good luck! Feel free to message me while you're working through stuff.
[ "Shift Operators? Symbolic Dynamics?" ]
[ "math" ]
[ "8pr425" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
I spoke to people at my university about something I made up and they described it as a shift operator, so I thought I'd share it here to see if anyone else would like to weigh in. I call it the right-left operator. Start with some sequence; I'll demonstrate with pi: 3.1415926535897932384626... Start at the beginning, "3", and move right 3 places; that takes you to "1". Then move left by 1, and you keep alternating like this. One of three things will happen. In the case of pi, you will return to the first 5 and have to exceed the sequence, bringing the operation to a halt. In the case of √17, you will be caught in a loop. Or finally, in the case of 9393939..., you will go on indefinitely. Some of the interesting things I have noticed: sequences similar to 939393... are preserved by the operation (the sequence of visited digits is still 939393...), and I discovered that 2√17 loops as well. 17 is special because it is the first natural number whose square root does not halt the operation, and I wonder if all multiples of √17 will loop. I am only just starting to look into this, but I thought someone else might be interested, or have enlightening information. Personally, I'd like to find a nontrivial number on which the right-left operator can operate indefinitely while hitting every single digit, or a nontrivial preserved sequence. I could probably code a program that could fabricate a number like that, but I want to know if you could write one in some closed form like 1/√2 or something, or if they have any other cool properties!
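Here's a quick simulator sketch of the operator as described (all names are mine; note that on a finite digit list, a walk that would really run forever either falls off the right end or exhausts max_steps):

```python
# Simulate the right-left operator on a list of digits: move right by the
# current digit on even steps, left on odd steps. A repeated (index, parity)
# state means the walk is in a loop; stepping out of bounds means it halts.
def right_left(digits, max_steps=10_000):
    i, trace, seen = 0, [], set()
    for step in range(max_steps):
        trace.append(digits[i])
        state = (i, step % 2)
        if state in seen:
            return "loop", trace
        seen.add(state)
        i += digits[i] if step % 2 == 0 else -digits[i]
        if i < 0 or i >= len(digits):
            return "halt", trace
    return "unknown", trace

pi_digits = [int(d) for d in "314159265358979"]
print(right_left(pi_digits))    # ('halt', [3, 1, 4, 2, 5, 3, 2, 5, 1, 5])

sqrt17_digits = [int(d) for d in "4123105625"]
print(right_left(sqrt17_digits)[0])   # 'loop', as OP observed
```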
I'm not sure about the indefinite-number situation, but your description of the operator is one that is frequently used in CS and programming. These work in base 2 in programming: there is a left shift, which when used like x << a shifts x to the left by a bits, and there is a right shift that does the opposite. So, application to your goal: one way to think of the shift operator is as multiplication by a power of the base. If you shift the number x to the left by a digits, you can think of it as x * 10^a, and a right shift of x by a digits as floor(x / 10^a). You might be able to use these properties to construct a number with the properties you would like, but I'm not sure about any way to do it without a program to solve it.
Well, yes, it's a shift operator, denoted by σ, on the space Σ⁺ (the space of semi-infinite words on 10 digits). The 93939393... example is an example of a periodic point of period 2. I'm not sure why you defined a halt at the second 5 of pi; was it for computational reasons? Repeating patterns are usually described with a bar over the smallest repeated unit, like a bar over 93, to denote that number.
First: what do you define as trivial? The number 2.31313131... hits every digit. I would think about the problem like this. If we look at your example 9.39393...: it works because there are two odd digits where the left one is bigger than the right one (9 > 3). We can make 14 more examples of this kind if you take an odd digit and a smaller odd digit and write them repeatedly; in particular, 2.313131... hits every digit. A thing you might try is to "chop numbers up" into smaller pieces (let's call them rhythms), where there is an even number of digits in each rhythm, the first number sends you to the right, and the last number of the rhythm sends you to the left. Example: ...313131..., starting with the first 3: it goes to a 1, and that goes back to another 3. Therefore the rhythm structure would look like ...axbacb..., where a, b are rhythms, x is the ending of a rhythm, and c is the beginning of a rhythm. Remark: a number needs to consist of infinitely many rhythms linked together from both sides, but a number can have an intro, a rhythm that just starts a sequence. Ex.: 2 would be the intro of 2.313131... So we could get a new notation for a rhythm: +$==$-, where + goes to - and vice versa, $ is a starting or end point of a rhythm, and = is a left-out number; this rhythm would be 5$ab$1 for a, b some digits. Questions you could ask: How many rhythms are there (base 10 or otherwise)? How many ways are there to link them? It seems obvious that there are finitely many rhythms and thus finitely many ways to link them together... but there could still be a number that jams rhythms together in infinitely many new ways.
You can interpret your number, e.g. pi, as a sequence of integers 0-9, and furthermore as a graph with indices as nodes, traversed starting from the first index. Each node has an outdegree of 1 (or 0 if you terminate when sent to an index that's out of bounds) and an indegree between 0 and 19. Additionally, if two nodes are adjacent, their indices can be at most 9 apart. Using these properties, we can identify sequences that will hit every number. First, we can't have a cycle, so there can't be zeros. We can generalize the problem by asking for chunks of elements, where every chunk, and every element in each chunk, is hit. If a chunk has an even number of elements, then it can be repeated so long as it sends to the adjacent chunk the same location it started in. An example is the sequence 231313131... If a chunk has an odd number of elements, we can make it have an even number of elements by chunking into 2s. Lastly, chunks can be composed.
Applying the right-left operator to pi, 3.14159265358979, produces 3 1 4 2 5 3 2 5 1 5, and then it halts because you can't go left 5 places when only 4 digits are present to the left. So no, not for computational reasons.
[ "Further Study?" ]
[ "math" ]
[ "8pqap5" ]
[ 1 ]
[ "Removed - see Career &amp; Education Questions thread on front page" ]
[ true ]
[ false ]
[ 0.67 ]
null
It is possible; like most fields it can be done, though not by me. I've always needed academic guidance: a professional environment with rigorous set tasks and leadership to check my progress and understanding would be crucial.
I agree, this does seem possible.
Hmm, maybe you could try? I’m not saying you’ll definitely be able to do it, but if it works out you would’ve gained a really valuable set of skills - to be able to learn on your own. If the books are really good, the explanations might serve as better guidance than even lectures. Of course there’s benefits to going the traditional way too. I’d imagine the extra credentials from a bachelors degree could only help in terms of career. It’s up to you in the end I guess haha.
I have already attempted an online course and I got quite far. But there's only so far you can go without proper guidance.
I can provide further guidance, but only in so far as recommending a path to take, as in which books to use, in which order, and which subjects depend on which. If you feel that won’t be enough, then I guess the only way is an official bachelors program.
[ "College grad that wants to learn more but has a hard time doing so." ]
[ "math" ]
[ "8ppxf3" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
[deleted]
Learning is hard. Knowing is rewarding. On a similar note from Minnesota novelist Frank Norris: "Don't like to write, but like having written. Hate the effort of driving pen from line to line, work only three hours a day, but work every day."
I had (have) a habit of typing up all my assignments and archiving them on my PC. Whenever I feel like I need to brush up on a subject, I look back at those assignments because I always find my own words easier to understand than external texts. On top of that, a good exercise is to refine your old assignments and write everything up into a coherent summary of the course/semester/degree. While doing this, you'll also see how dumb you were and appreciate how smart you've gotten. Finally, you can extend this refining process by rewriting everything in higher level language and using stronger tools. Eventually, the question of "How clean can I make this?" should motivate graduate level research.
If you're genuinely interested in mathematics, find a topic you like and research the fuck out of it. For example, for me the topic that I consider my passion is integral calculus. You may wonder how complicated just some dumb integral could get; well, it gets extremely complicated, from transcendental functions to approximations to trigonometry, etc. The point is, once you find a topic you like you're almost forced to get good at mathematics in general. And the actual topic doesn't really matter; the important thing is to solve hard problems in that area, because then you can use the same techniques you learned in other areas of mathematics. Some resources I use are brilliant.org, https://artofproblemsolving.com/community/c7_college_math, https://math.stackexchange.com/, and https://www.youtube.com/user/numberphile (this pumps me up to learn more math). You'll be surprised by the quality of books that people can recommend you on those sites, this one included. PS: There are really hard books on several areas of mathematics, and I don't think anybody could ever solve all exercises from all books in a particular area. So you'll most likely always learn something new by reading books about topics you might already know. I understand how you feel when you can't find challenging exercises in a book, but don't worry, because in my opinion that feeling is common. It's really specializing in a topic that'll get you high on maths knowledge. Hope this helps.
I think the complaint is fair. Graduate texts are generally pretty dry and are written as references instead of as educational material. Most people read graduate texts as part of a class, where they can discuss it and ask questions about it. The worst part is that many of them lack solutions to their problems, which makes self-studying much harder than taking an already-hard graduate-level class.
[ "Would the use of colors to graph a 4-dimensional space be useful?" ]
[ "math" ]
[ "8ppeut" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
I was reading a bit about machine learning, and the correlations between each input parameter. It is easily visualizable when comparing 2 or 3 at once with 2D or 3D graphs. But there are cases where the input space is n-dimensional, which makes things a lot less intuitive (separating two spheres in 3D you can wrap your head around; separating hyperspheres in an n-dimensional space becomes way too abstract). As you guys probably know, we can't really understand 4-dimensional space and visualize it (at least I can't). I thought about adding another parameter to characterize the point in space (other than [x,y,z]) and thought about using color (or a gradient from black to white, [0,1]). It would not be as accurate and easy to wrap your mind around as numbers on a line, but you would be able to see better how that 4-dimensional space behaves, or at least how that 4th parameter varies in relation to the others. After searching a bit I don't seem to find an example of someone doing this. Would it work?
Yes. Example: https://www.youtube.com/watch?v=T647CGsuOVU
Exactly! Also 3Blue1Brown edit: colours are used all over complex analysis, to represent ℂ→ℂ functions as graphs in ℂ
Maybe you can give a specific example of what you wanted to plot? (Or even better, an example plot?) There are many, many examples of the use of color (also size, shape, orientation, texture, symbols, ...) to add additional information dimensions to various kinds of information graphics. I recommend Imhof’s , Bertin’s , and all of Tufte’s books. /r/math may not be the best place to discuss information design.
This is basically what they do to represent phase in the complex plane. Also heat maps and contour plots.
Try it and see for yourself. For 4 dimensions white to black should be good, but I guess more than 8 dimensions is too complicated to be interpreted by your mind if only visualized with colors. Or could "embedding visualization" from TensorFlow be useful for you? http://ahogrammer.com/2016/12/01/tensorboard-embedding-visualization/ https://ai.googleblog.com/2016/12/open-sourcing-embedding-projector-tool.html?m=1
[ "I did some bad math, made my day longer but along with my Math teacher still could not figure out where I went wrong." ]
[ "math" ]
[ "8poups" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.5 ]
null
It looks like you either didn't substitute the v = sqrt(5-u), or you forgot the v in -2vdv = du. Either way, you're missing a v.
You might want to get rid of that habit of using the symbol for logical implication (->) for the words "is equal to" or "leads to." Look up \leadsto in LaTeX.
Trying to teach a calculus student proper form? Godspeed
Use Beta function!!
The first integral from sqrt(3) to sqrt(5) is missing a v: before substitution you have the u, the sqrt(5-u), and the du, but it looks like only the first and third made it through.
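For reference, here is a sketch of the substitution the commenters seem to be describing, assuming the original integrand was u√(5−u) on [0, 2] (the pieces "the u, the sqrt(5-u), and the du" and the limits √3 to √5 suggest this):

$$v = \sqrt{5-u}, \quad u = 5 - v^2, \quad du = -2v\,dv,$$
$$\int_0^2 u\sqrt{5-u}\,du = \int_{\sqrt{5}}^{\sqrt{3}} (5 - v^2)\,v\,(-2v)\,dv = 2\int_{\sqrt{3}}^{\sqrt{5}} v^2(5 - v^2)\,dv.$$

Dropping either factor of v from the −2v dv reproduces exactly the missing-v error pointed out above.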
[ "Product over vector sum simplification" ]
[ "math" ]
[ "8poq0a" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.11 ]
null
There's no context for what you're doing. If I had to guess, the bottom is the magnitude of the sum of your vectors, but then the top is nonsensical. Also, this sub isn't for this.
You can't multiply vectors. You're not giving any context for the quantities you're dealing with. Are they vectors in R^n? Complex numbers? Numbers in a finite field? Expectation values? There are subs for math homework help and this isn't one. Try searching even the tiniest bit or read the sidebar. If you mean to take the dot product on the top and you're summing the squares of the magnitudes on the bottom, then I'd imagine there's some simplification you can do.
You need to use correct notation and explain what you're doing more clearly. To simplify it, try using an identity for the dot product that involves the angle between them and see if it helps.
Then the answer is zero. The dot product is zero when vectors are perpendicular to each other. You can see this clearly from v*w=|v||w|cos(theta)
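In symbols, the identity being invoked is the standard one (nothing here is specific to this thread's quantities):

$$v \cdot w = \lVert v \rVert \, \lVert w \rVert \cos\theta, \qquad \theta = \tfrac{\pi}{2} \implies \cos\theta = 0 \implies v \cdot w = 0.$$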
Hmmmm.... So i would then sub 0 for R x Xr?
[ "Convert sequence of numbers into a approximate formula." ]
[ "math" ]
[ "8pn5i5" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.6 ]
I have a couple of sequences of numbers and I want to get the formulas that would most closely recreate that sequence of numbers. Are there any tools I can use for this? Or how would I go about getting this formula. example of a sequence of numbers I want to get the approximate formula of: 1200, 1380, 1587, 1825, 2099, 2414, 2776, 3129
The word "interpolation" is for finding formulas that exactly match your data points, the word "fitting" is used for finding approximations. Most of the time people want fits.
http://www.wolframalpha.com/input/?i=fit+quadratic+%7B1200,+1380,+1587,+1825,+2099,+2414,+2776,+3129%7D
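For anyone who prefers to do the same fit locally, here is a minimal Python sketch; the quadratic degree mirrors the Wolfram Alpha query above, and taking x = 0, 1, 2, ... as the sample points is an assumption.

```python
import numpy as np

y = np.array([1200, 1380, 1587, 1825, 2099, 2414, 2776, 3129])
x = np.arange(len(y))            # assume the terms are sampled at 0, 1, 2, ...

coeffs = np.polyfit(x, y, 2)     # least-squares quadratic; highest degree first
print(coeffs)                    # a, b, c in a*x**2 + b*x + c
print(np.polyval(coeffs, x))     # fitted values, to compare against y
```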
Glad to help!
Check out OEIS, the Online Encyclopedia of Integer Sequences . edit: unless you want to understand how you could come up with a formula yourself, in which case I think you should be posting on r/learnmath . edit2: OEIS won't be very useful for textbook exercises, it'd have to be something relevant, not just any random sequence. Try Wolfram Alpha instead. (type for instance "interpolate 1200, 1380, 1587, 1825, 2099, 2414, 2776, 3129")
Thank you so much, that's perfect!
[ "Stupid question but, why weaken your conclusion to a∃xP(x) when you could just leave it as P(c)?" ]
[ "math" ]
[ "8pm4qs" ]
[ 33 ]
[ "" ]
[ true ]
[ false ]
[ 0.88 ]
In proving a theorem, you conclude that some statement P holds true of a particular object c. And then you weaken it by saying there exists some x such that P holds true of x. Why do that? Why do existential introduction? Surely it weakens what you say, like disjunction introduction (from A, conclude A or B). I have no problems with existential elimination. One can clearly see "If there exists x such that P(x) then..." leads to a stronger theorem than "If P(c) then...". Weaker conditions mean stronger theorems.
When the particular solution constructed in the proof is of any interest, we don't, eg Chinese Remainder Theorem. When it's just some random impractical solution out of potentially many, eg Bolzano-Weierstrass by repeatedly partitioning the interval, why bother keeping it around?
Sometimes the c is completely meaningless in the context of the problem. For example, consider the following proof that if S is a finite set, then S ∪ {x} is finite. For me, a set S is finite if there exists a natural number n and a bijection S -> {1, ..., n}. By assumption, S is finite, so there is some n for which there is a bijection f:S -> {1, ..., n}. Now consider the function g: S ∪ {x} -> {1, ..., n+1} defined by g(s) = f(s) if s is in S, and g(x) = n+1. This is a bijection, and so S ∪ {x} is finite. You can see at the end that I went from P(c) to ∃x P(x). (More precisely, I went from "g is a bijection with image {1, ..., n+1}" to "there is some natural number m and some function g such that g: S ∪ {x} -> {1, ..., m} is a bijection".) The reason I did this is because the function g is completely irrelevant to the problem. I started with some random function f, and I ended up with some random function g. Why would I want to carry that around with me wherever I go? Essentially the reason why this is happening is because we had an existential quantifier in the hypothesis of our proposition, so when we did existential elimination we just got some random function f. We don't know anything about f (except that it's a bijection), so when we build g from f, why does g become so important that we should remember it? It doesn't, so at the very end once we've gotten everything we need out of g, we "throw it away" and go back to the existential for our conclusion.
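In inference-rule form, the step under discussion is just existential introduction (the standard rule, not anything specific to the proof above):

$$\frac{P(c)}{\exists x\, P(x)}$$

The last step of the finite-set proof instantiates it with c = g, the bijection that gets thrown away.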
Perhaps if c is difficult to describe? If it takes a paragraph to define c, then ∃x P(x) is neater than P(c) as the statement of the theorem.
Can you give an example?
That's also a great example because depending on whether x is in S or not, S ∪ {x} will be in bijection with sets of different sizes. So you get either P(c) or P(d), and it makes sense to weaken both to "∃x P(x)".
[ "Have your mathematics studies made you more of a literal thinker?" ]
[ "math" ]
[ "8plwby" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.3 ]
There are many times in which I'm chatting with a friend, and our different communication styles are apparent. For example, I just had the following chat with a friend through text: I just did some quick googling and found this article: . I feel that I've grown into a literal thinker after all my undergraduate mathematics courses. In math, I've learned to be precise and articulate with my words. In my first internship, my mentor mentioned that, yes, we do employ this style of thinking and rationale while at work, but it's wise to turn it off at home with our significant others. I find it difficult to turn it off. I'm the only one amongst my friends whose studies are in mathematics and computer science. Most of my friends are in finance, accounting, or medical fields. Does this resonate with anyone else? If so, how do you manage it? Do you have these mini-clashes with people who are non-literal thinkers? What are ways I can try to adjust so I have fewer of these mini-clashes? : I personally felt a little offended when my friend thought I lacked the emotional intelligence to understand him. My non-literal-thinking friends often think I'm deliberately being difficult and dense. : I think I may also respond differently through text messaging compared to an organic, face-to-face, verbal conversation. Text messaging has a focus on the literal words, because that's the only thing communicated. Normal, face-to-face communication incorporates so much more than just your word choice. There's tone, body language, and pacing. All that is lost in text messaging. With texts, they're just words on a screen that I can read, reread, and ruminate on. I can dwell on the physical words and letters. I have the time and space to spot spelling and grammatical errors. All this is different than the context of a face-to-face conversation.
A bit tangential to the topic, but I feel like your example doesn’t illustrate your point, because saying six-pack clearly means “visible abs”, so your response was more obtuse than literal. More on point, I think you’re right, but this more logical and literal mindset can easily be balanced by reading more literature imo. With writers like Pynchon or Joyce, you have to focus more on subtext, implication, and intent rather than the words themselves to get the full meaning. So I think if you develop both sides of thinking concurrently, they should “balance out.”
You write like you want to end up on /r/iamverysmart. I suspect that ego is the core issue, not math.
I've noticed the trait in myself, but I've been trying to work on it. It's a reflex of looking for a counterexample to every statement even if it seems true. Great in proving, terrible in conversation. It's a human conversation, not a watertight proof, so you save everyone time by addressing what was meant rather than what was said. Shoulder the burden of disambiguating what the other person said by addressing the most interesting facet. Conversation becomes more efficient and more interesting. Sure, sometimes such an astute semantic observation is genuinely interesting, in which case absolutely say it, but more often you waste ten seconds of everyone's time.
To me, having muscle mass and having muscle tone are two completely different things. Having a six pack comes from most importantly (1) low body fat, and also (2) hypertrophy of the muscles, e.g. resulting from strength training. If your muscles are strong but not visible, that’s not a “six pack”. https://en.wikipedia.org/wiki/Muscle_hypertrophy https://en.wikipedia.org/wiki/Strength_training I personally felt a little offended when my friend thought I lacked emotional intelligence to understand him. My non-literal thinking friends often think I'm deliberately being difficult and dense. Your friends are right. Your disputes here are entirely semantic, and you are distracting from the substance and emotional flow of the conversation to go on trivial tangents for your own personal gratification, at your friends’ expense. how do you manage it? Try to listen more carefully to what the other person is trying to say and understand . It helps if you care what the other person thinks and feels. Then target your responses to help meet your interlocutor’s emotional needs, rather than your own. For example, when someone is blowing off steam by complaining about a frustrating experience, don’t nitpick their grammar or start dishing advice. Instead, acknowledge their feelings and offer your sympathy. You might enjoy the book , or you could try .
In the case of abs though, I believe you can’t build much mass there, so it usually refers to tone by default. But I see your point. Also, if you like math to the point you’d enjoy math built into your literature, check out Jorge Luis Borges. Brilliant writer.
[ "How has the growth of computational power impacted research in mathematics?" ]
[ "math" ]
[ "8plicl" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.75 ]
As a computer science/mathematics hybrid, it seems to me that a lot of research problems can be solved by programming. I know some can't, just by the nature of the problem, like finding primes etc. But what are some problems that computers can't solve where traditional methods might? I haven't done research in math so I have no idea how it works.
As a researcher in computer science my feeling is that the fraction of research problems in high mathematics (or theoretical computer science...) that can be "solved" via programming is absolutely negligible.
Just speculation, but I would imagine it has enabled entire fields of numerical analysis to get off the ground from theory into practice, which of course motivates more study into them once all those applications become viable. Imagine being stuck with the Euler method in the '50s because it's the best the state-of-the-art computer at NASA can do. Forget about researching stuff like numerical PDEs.
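For reference, the Euler method mentioned above is about as simple as a numerical ODE solver gets; the test equation y' = y below is my own illustration, not from the thread.

```python
# Forward Euler for y' = f(t, y): repeatedly take the step y <- y + h*f(t, y).
def euler(f, t0, y0, t_end, n_steps):
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# y' = y with y(0) = 1 has exact solution e at t = 1 (about 2.71828).
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000))  # ~2.7169: slow, first-order convergence
```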
I would say the increase in the number of educated people worldwide and the ease of access to information have had the greatest impact. edit: to answer your question, the human brain is (likely) computable, or at the very least imitable, so in principle, in the long run, computing alone will ultimately render humans obsolete. edit2: forgive my nitpick, but you mentioned finding primes; all large primes were found by more or less brute forcing with the use of computers, so I would argue that this is indeed a problem that is being investigated by programming. Now, if you mean that this sort of search will not produce a sort of formula for primes, a deeper understanding of the subset itself, then it's another story entirely.
I have found computers invaluable for gaining intuition quickly by very easily being able to check and play with a ton of examples. As George Andrews said, "computers are pencils with power steering".
Indeed. This afterword I found in a dynamical systems book says so too, in the 2nd paragraph in the middle.
[ "Most \"beautiful\" Calculus textbook?" ]
[ "math" ]
[ "8pn5jz" ]
[ 268 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
This may sound like a weird question, so let me explain. I'm about to finish my second year in university studying computer science and I'm not very happy with my courses. Even though I know what a derivative or an integral is and I can solve a couple of exercises, I don't feel like I actually understand Calculus. Same goes for other subjects as well, but for the time being I chose Calculus as a good start to fix my problem. I feel as if the magic has escaped and the beauty of mathematics has been replaced with utter boredom. I need a textbook (or lecture-video series, or whatever) that doesn't just explain a couple of things, I want something that goes the extra mile, something more meaningful. I know the way I perceive it might sound strange, but oh well...
Spivak.
It’s a really nice proof-based intro to calculus, without becoming an analysis textbook. Spivak’s writing is also pretty witty and humorous.
I'd say with the definition/construction of ℝ.
Have you watched the 3blue1brown series on calculus? It would be a good place to start.
One way to look at it is that Calculus shows you all the cool and useful stuff you can do with limits like computing convergence of sequences and series, computing derivatives and integrals, etc. In Analysis you learn why you're justified in taking limits in the first place--starting by rigorously constructing the Reals and proving their completeness--as well as examining all the weird and pathological things that limits allow you to do.
[ "What Are You Working On?" ]
[ "math" ]
[ "8pl4zy" ]
[ 13 ]
[ "" ]
[ true ]
[ false ]
[ 0.81 ]
This recurring thread will be for general discussion on whatever math-related topics you have been or will be working on over the week/weekend. This can be anything from math-related arts and crafts, what you've been learning in class, books/papers you're reading, to preparing for a conference. All types and levels of mathematics are welcomed!
Making my way through the problems of Aluffi's Algebra chapter 0 so I can learn more algebraic topology in the fall.
Learning more efficient ways to tutor Calc III.
Learning about crossed products of C*-algebras with the end goal of learning about the irrational rotation algebra (e.g. simplicity, not AF). I tried taking shortcuts multiple times and that didn't work, so I'm going linearly through Williams' book on it now... a pretty nice read so far.
Aluffi is one of the only math texts I've really enjoyed self-studying and didn't have to force myself. Interesting problems, helpful exposition... destined to be a modern classic, I'm sure.
Easily the most readable textbook I've come across. Also makes the alg top/diff geo I've learned make so much more sense.
[ "Why is Additive Number Theory \"more difficult\" than Multiplicative Number Theory?" ]
[ "math" ]
[ "8pjeg5" ]
[ 64 ]
[ "" ]
[ true ]
[ false ]
[ 0.91 ]
As someone with very entry level knowledge of number theory (a summer grad level course), and not a lot of algebra background, I've read the general consensus is that the former is way more difficult to deal with than the latter. Why is that the case? I know that addition and multiplication give quite different structures on the sets we're working with, but is there a more in depth explanation for it?
You shouldn't put too much stock in the names as they have only the barest, most tenuous connections to the subject matter. That said, the definition of prime is multiplicative in nature: primes are defined in terms of their divisors. It's easier to prove multiplicative facts about primes than additive facts, just because primes are defined multiplicatively. Something like Goldbach's conjecture (which most people regard as additive) is hard because it involves adding primes. There is no natural reason why adding primes should be nice, and that's what makes the proofs so difficult.
Multiplicative number theory is just another name for analytic number theory. As the latter name suggests, you need analysis and number theory, and not much else, at least to start. Additive number theory is next-level, Fields-medal-worthy subject matter. You need to know everything in order to prove even basic results in this area, which is why Terence Tao is so good at it -- he knows everything.
This is an utterly biased presentation, and I'm not even working in analytic NT. On the other hand, fascinating though Green and Tao's work is, I know a few outstanding number theorists who would consider it more combinatorics than number theory proper.
It's pretty easy to attack someone else's presentation, but if you disagree, perhaps you could present your own view to correct the record. OP by their own admission has only entry level knowledge which makes it impossible to go into great detail. But then again if one knew a lot about the subjects then one wouldn't be asking.
Multiplication is what happens when you do addition many times in a regular way. Multiplicative number theory is just the "easy" part of additive number theory, with a different notation.
[ "If you ask everyone in the earth to randomly choose a number from 1 through 100, what would be the most frequently chosen number?" ]
[ "math" ]
[ "8plbh0" ]
[ 1 ]
[ "Removed - try /r/askmath" ]
[ true ]
[ false ]
[ 0.53 ]
null
There's a tendency for humans, when asked to pick a random number, to avoid special numbers. For example I would expect them to pick 1 and 100 less often, keep away from 50 and the middle, and avoid multiples of 10. Google turns up this page which claims that out of 10, humans choose 3 and 7 disproportionately often. Out of 100 they pick 37 the most.
42
If it is truly random, there will be a uniform distribution
This seems like a question best answered empirically than theoretically. Why not start a poll?
There will probably be a lot of people giving their "lucky number", so you'll probably see peaks at some "culturally lucky" numbers, like 7 in Europe.
[ "What is the \"Naïve Set Theory\" of other branches?" ]
[ "math" ]
[ "8pj700" ]
[ 6 ]
[ "" ]
[ true ]
[ false ]
[ 0.64 ]
NST is a short and concise, rigorous yet informal treatment of the essentials of set theory. What are the counterparts for other subjects? Edit: ITT I asked for book recommendations but got a physics-bashing circlejerk instead
Most of Physics is Naive Analysis pretty much…
Let's just throw away higher order contributions without scaling the problem first. Have a sum of integrals? Let's change that to the integral of the sums without checking monotonicity, nonnegativity, absolute convergence, dominated... What do you mean 𝛿 is a distribution? Pfft, this isn't statistics. Just let me have my function.
lol @ physics bashing circlejerk
Just let me have my infinite-at-one-point-and-zero-everywhere-else function. Oh god it's so true. And then we all cry when the maths says we have to assess the delta function where it's 'infinity', then we sweep it all away by dividing by a different infinity whenever what we're calculating should be observable. The upshot of this kind of nonsense is some of the most accurate predictions ever made by science. We're talking correct to thirteen significant figures. But also some of the worst predictions in science; we're talking wrong by 120 orders of magnitude. Physics is hard, guys.
OK, so you were really asking for texts that are good introductions to their subjects. Here are a few: Milnor (also mentioned by an earlier response), Serre, Atiyah & MacDonald, Guillemin & Pollack, and May. The last is good and short but it is dense, not an expository classic like the ones listed above. Is this more the kind of thing you were asking for?
[ "Tips for aspiring mathematician." ]
[ "math" ]
[ "8pinqb" ]
[ 22 ]
[ "" ]
[ true ]
[ false ]
[ 0.9 ]
So I recently had my first brush with reading and writing proofs in linear algebra and it didn't go as smoothly as I would like. I'm set to take abstract algebra and real analysis but my proof writing is fairly weak. How can I get better at writing proofs, and would it be doable to take both of those classes in the same semester?
How can I get better at writing proofs Practice makes perfect, also answer questions on Math-StackExchange ;)
Proof writing is a skill that is difficult to master. Even people with tons of experience tend to make errors. So, how can you get better?
Read textbooks carefully! A lot of the time, I think the issue people have with writing proofs is that they don't know what sort of style their writing should have, or how much they should include vs leave out of the proof. If you read all the proofs in your textbook, you'll get a good feel for what a proof should sound like, what you should include in writing a proof, etc.
Proofs are really just formally structured logical arguments. To really get good at them, you have to get good at thinking logically. On the most zoomed-out scale, you have to know where you're starting from and where you're going to. Once you have that down, you can start figuring out different paths that connect the two points. Through experience you will develop an intuition about which paths are more likely to work and which are dead ends. I would recommend the book The Art and Craft of Problem Solving. It will certainly help familiarize you with the bizarre mathematical landscape, along with giving you a whole bunch of tools (many of which are quite unorthodox) that will help you solve problems--and prove that they're solved.
I don't know for sure. But could you kindly elaborate your point of view? Perhaps then people will understand your remarks.
[ "What are some other interesting characterizations of the set of integers Z?" ]
[ "math" ]
[ "8pigl3" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.8 ]
I find it pretty interesting that the set of integers has many characterizations, such as: The set of integers can also be defined by using a modified version of the Peano axioms. It is a nonempty set Z together with a bijective function s:Z→Z which satisfies the following properties: Are there any other interesting characterizations of Z?
Your different descriptions are not all of the same object: Z as a set, as a group, and as a ring are not the same thing! For example, as a group you can swap 1 and -1 (the additive group Z has a nontrivial automorphism) but as a ring you can't. When you speak of characterizing the "set" Z, it is just a countable set and all countable sets are in bijection with each other: Z and Q are equivalent when viewed merely as sets, for instance.
Choose a basepoint in the circle and let Ω denote the space of loops in the circle starting and ending at that basepoint, i.e. continuous maps [0, 1] -> S¹ sending 0 and 1 to the basepoint. This is a topological group under composition of paths, so its set of connected components inherits a group structure; the latter group is ℤ, with the isomorphism given by sending a loop to its degree. Is this contrived? Sure, but it or something closely related to it appears in many places in topology; for example, Guillemin and Pollack's differential topology textbook has a topological proof of the algebraic closure of ℂ, which uses this description of ℤ.
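In symbols, the characterization sketched above is the standard chain of isomorphisms

$$\mathbb{Z} \;\cong\; \pi_0(\Omega S^1) \;\cong\; \pi_1(S^1),$$

with a loop sent to its winding number (its degree).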
Regarded as ordered sets, on the other hand, ℤ (more appropriately, ω* + ω) differs from ℚ (more appropriately, η) quite a bit!
One of my favourite constructions of the integers is that it is the pullback of the diagram ℚ→ℚ⨂∏ℤ_p ←∏ℤ_p, where the product ranges over all primes, and I have denoted by ℤ_p the p-adic integers.
How do you construct Q and Z_p without already having constructed Z?
[ "In the real world, would there be an infinite number of derivatives to distance/velocity/acceleration?" ]
[ "math" ]
[ "8phxk4" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.75 ]
The change in acceleration is jerk, the change in jerk is jounce, etc. In the real world, could you theoretically derive the change in distance infinitely, since a real world distance-time equation would be infinitely complex? Or is there some kind of limit to the degree of a real world distance-time polynomial, that leads to a limited number of derivatives? Not sure if I explained it well, but basically, are real world distance-time expressions infinitely complex?
Remember that mathematics just models the real world, but is not the real world. Given certain conditions, a function is infinitely differentiable. You might use such a function to model the displacement (that is, distance from a fixed point) of a moving object, and then you can use that to model the object's velocity, acceleration, jerk, etc. But these are just models, or approximations: they are not the displacement, velocity, etc. (In fact, quantum mechanics prevents these from being measured exactly.) So yes, you could derive any number of these derivatives, but they may not be accurate or even mean anything, and making your model of the object's displacement as accurate as possible may lead to discontinuities in higher derivatives, meaning it is not infinitely differentiable after all.
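As a concrete illustration of that last point (a standard example, not from the thread), a displacement model can be differentiable once but not twice:

$$f(x) = x\lvert x\rvert = \begin{cases} x^2, & x \ge 0 \\ -x^2, & x < 0 \end{cases} \qquad f'(x) = 2\lvert x\rvert,$$

so the modeled velocity f' exists and is continuous everywhere, while the modeled acceleration f'' jumps between -2 and 2 and is undefined at x = 0.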
I think that is misrepresenting platonism somewhat. Platonists believe (present tense, this is a widely held belief) that mathematical ideas are every bit as real as the physical world, despite being abstract and devoid of any kind of physical existence themselves. In other words, they form A real world, not THE real world.
... quantum mechanics prevents these from being measured exactly. The quantum wavefunction, as I understand it, is continuous (pretty sure of this) and infinitely differentiable (less sure of this) even if the measurements are discrete ("measurement problem") or uncertain (Heisenberg principle). In fact both General Relativity and Quantum Field Theory assume spacetime is a continuum. Spacetime may be ultimately discrete and/or not infinitely differentiable, but that is speculative and we won't know until we have a theory of Quantum Gravity.
Yes, I was rather oversimplifying. Thanks.
[irrelevant] "Remember that mathematics just models the real world, but is not the real world": some people used to disagree. edit: as commented below, not exactly true
[ "How to get really interested in mathematics?" ]
[ "math" ]
[ "8phyci" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
How do I get really interested in maths? I'm a high school senior and I have participated in regional and national math olympiads and did quite well in them, but in retrospect I don't think I'm really, really interested in maths. I know some batchmates who are so interested in maths that they're always thinking about it, always thinking about the problems and obsessed over them (even when they're eating they're doing math on a napkin!), while I'm wasting my time browsing lolcat pictures and memes :P How do I get so deeply interested in maths and get a sense of thrill doing it?
There are many different flavors of math and it really depends on what you like. I myself really like thinking and asking questions about shapes or shape-like structures. For example, take the classification theorem for surfaces, which says that up to a continuous transformation (i.e. stretching and reshaping without tearing or creasing), every surface can be classified as either a sphere, a torus of genus g (a "donut with g holes"), or a shape consisting of k Möbius strips. Understanding why that is and what implications this statement and similar statements have gives me a lot of motivation to work in mathematics. On the other hand, a colleague of mine really likes working with differential equations: being given a problem (i.e. a differential equation) and then finding out what conditions need to be satisfied to obtain a desired result. He keeps talking about various theorems that he finds amazing because they provide the existence and/or uniqueness of a solution given a set of prerequisites. What I'm saying is that there is no "this is why you should be interested in maths", because people have different tastes, even more so because saying "I'm interested in math" is like saying "I'm interested in sports": sure, you like physical exercise, but you can't really develop a deep interest in sports since that's such a broad term; you're usually interested in, say, soccer or something along those lines. In order to develop a deep interest in maths, you should try reading up on the various different fields of maths and finding something that tickles your fancy.
Napkin's a pretty good primer but it lacks decent exercises. I'd recommend getting hold of an algebra textbook (D&F, Artin, etc.) and working through it to really learn group theory well. For group theory there's a video lecture series that goes along with Artin pretty well (type into YouTube: harvard abstract algebra). If you're blowing through the group theory, then go on to rings; the first two books cover as much ring theory as they do groups. Alternatively you could start working through baby Rudin or some other analysis book to see if you like that style of mathematics.
Do you recommend some books which are accessible to high schoolers for this? I know some basic stuff (e.g. college first-year group theory, number theory, combinatorics, etc.) but nothing too advanced. Thanks!
how much group theory do you know? i.e. what topics
http://web.evanchen.cc/napkin.html (I, II, III in that book)
[ "Simple Questions - June 08, 2018" ]
[ "math" ]
[ "8pl50v" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
This recurring thread will be for questions that might not warrant their own thread. We would like to see more conceptual-based questions posted in this thread, rather than "what is the answer to this problem?". For example, here are some kinds of questions that we'd like to see in this thread: Can someone explain the concept of manifolds to me? What are the applications of Representation Theory? What's a good starter book for Numerical Analysis? What can I do to prepare for college/grad school/getting a job? Including a brief description of your mathematical background and the context for your question can help others give you an appropriate answer.
This is either a troll post or you've been trolled yourself. This problem is the famous unsolved Collatz conjecture.
Most of the time, order is important. For instance, these two actions are rather different: Frequently in math, "do thing A, then thing B" is different from "do thing B, then thing A". Anything else is an exceptional case. If you want to get a better intuition for this type of thing, just plug in some numbers! It's easy to forget that these formulas with x and y and z work for actual numbers, but playing with actual numbers is very important. For instance, if you let x=1 and z=1, then (x+z)^5 = 2^5 = 32, but x^5 + z^5 = 2. If you work with these concrete examples enough, it will seem much more natural to you. What if you want to multiply 10001 * 9999? This would suck to work out long-hand, but we could also write it as (10000+1)*(10000-1), and then we can find the answer easily by distributing. Keep asking "why?", it's a great habit.
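In the same plug-in-numbers spirit, both examples above can be checked in a couple of lines (plain Python, nothing assumed beyond the examples themselves):

```python
x, z = 1, 1
print((x + z)**5, x**5 + z**5)     # 32 vs 2: exponentiation doesn't distribute over +

print(10001 * 9999, 10000**2 - 1)  # both 99999999, since (a+1)(a-1) = a**2 - 1
```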
Radix is Latin for root.
Here the OP's username is weirdly relevant...
[ "How does logic and intuition relate together when problem solving?" ]
[ "math" ]
[ "8phbl4" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
We are taught in math classrooms to follow the algorithm or write the rigorous proof, but sometimes taking a leap of faith works, and then the steps are added in between. What should come first in problem solving? Is thinking logically first too slow? Is thinking intuitively first too sloppy?
Personally I prefer intuition first. It allows me to get my thoughts on paper and is my primary source of insight . That being said you need rigour afterwards to ensure that everything is correct and that your intuition did not lead you astray.
Some people increase intuition by mastery of the algorithm. Others are naturally curious and explorative and intuition is a result of discovery. In my personal opinion, in one way or the other, intuition comes from a sound foundation in logical reasoning.
There's no hard-and-fast rule here, but broadly speaking, it is usually best to start with some kind of intuitive reasoning, and then translate it into a formal argument. On problems of significant size, it's too easy to get lost starting blind with formal logic. Terence Tao has a very good post on the topic.
Solve some problems and see for yourself how you think about them
I think Terry Tao sums this up perfectly .
[ "Challenging linear algebra problem involving unit circle. I’m stumped and thought someone here may be able to figure it out." ]
[ "math" ]
[ "8pgwa0" ]
[ 33 ]
[ "" ]
[ true ]
[ false ]
[ 0.97 ]
Find a collection of vectors M = {v1, v2, ..., vn} in R^2 with the following properties: The sum of the vectors is 0, that is, v1 + v2 + ... + vn = 0. The length of each vector is less than or equal to 1: ||vi|| ≤ 1 for each i. For every ordering vi1, vi2, ..., vin of the vectors there is a k between 1 and n with ||vi1 + vi2 + ... + vik|| > 1. That is, the path obtained by following the vectors must leave the unit circle no matter the ordering. A non-example: M = {[1, 0], [0, 1], [−1, 0], [0, −1]}, because the ordering [1, 0] + [−1,0] + [0, 1] + [0, −1] never leaves the unit circle.
Let M be empty :-). Alternatively, take (1, 0) and n copies each of (-1/(2n), sqrt(1-1/(2n)^2)) and (-1/(2n), -sqrt(1-1/(2n)^2)) for n large, say n=100. If you take (1,0) first, you can't follow it with anything. If you take one of the others first you have to follow it with the other one, then (1,0), and then you're stuck again.
It doesn't have to work for all k from 1 to n. The problem is, for every ordering, the sum leaves the unit circle at some point after you start adding them together in that order.
What they have as an example uses effectively 4th roots of unity and doesn't work.
Maybe use the roots of unity? (Identifying the complex plane with R^2.)
You can't though. After you've used one of (-1/(2n), sqrt(1-1/(2n)^2)) you have to follow it by (-1/(2n), -sqrt(1-1/(2n)^2)), and then another (-1/(2n), sqrt(1-1/(2n)^2)) would take you out of the circle.
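A quick numeric sanity check of this construction (a sketch assuming the vectors are (1, 0) plus n copies each of (−1/(2n), ±sqrt(1 − 1/(2n)^2)), as above):

```python
import math

n = 100
a = 1.0 / (2 * n)
b = math.sqrt(1 - a * a)
vectors = [(1.0, 0.0)] + [(-a, b)] * n + [(-a, -b)] * n

print(sum(v[0] for v in vectors), sum(v[1] for v in vectors))  # ~ (0, 0): the vectors sum to zero
print(max(math.hypot(vx, vy) for vx, vy in vectors))           # 1.0: each vector has length <= 1
```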
[ "What is your favorite proof of the law reciprocity quadratic?" ]
[ "math" ]
[ "8pgvwj" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.73 ]
[deleted]
First of all let [;q^* = (-1)^{(q-1)/2}q;] so that [;q^*\equiv 1 \pmod{4};], and we want to show that [;\left(\frac{q^*}{p}\right) = \left(\frac{p}{q}\right);]. A lot of proofs hinge on the fact that [;\sqrt{q^*} \in \mathbb{Q}(\zeta_q);], either explicitly or implicitly. There are a number of different ways to show that. My favorite is to just note that by Galois theory [;\mathbb{Q}(\zeta_q);] has a unique quadratic subfield, which must be ramified only at [;q;], as [;\mathbb{Q}(\zeta_q);] is ramified only at [;q;]. The only such quadratic extension of [;\mathbb{Q};] is [;\mathbb{Q}(\sqrt{q^*});]. After this, there are a number of ways to conclude QR from that. My favorite way is to just use basic facts about Frobenius elements. For any abelian(*) extension [;K/\mathbb{Q};] and any [;p;] not ramifying in [;K;], we define [;\operatorname{Frob}_p\in\operatorname{Gal}(K/\mathbb{Q});] to be the unique element such that [;\operatorname{Frob}_p(x)\equiv x^p \pmod{P};] for a prime ideal [;P;] of [;K;] which lies over [;p;]. (*) It's not actually necessary for this to be abelian, but if [;\operatorname{Gal}(K/\mathbb{Q});] was not abelian, [;\operatorname{Frob}_p;] would only be defined up to conjugation, which would make things a little harder to work with. It's not hard to see that: [;L=\mathbb{Q}(\zeta_q);], [;G:=\operatorname{Gal}(\mathbb{Q}(\zeta_q)/\mathbb{Q})= (\mathbb{Z}/q\mathbb{Z})^\times;], [;\operatorname{Frob}_p = p \pmod{q};]; and [;K = \mathbb{Q}(\sqrt{q^*});], [;H:= \operatorname{Gal}(\mathbb{Q}(\sqrt{q^*})/\mathbb{Q})= \{\pm1\};], [;\operatorname{Frob}_p = \left(\frac{q^*}{p}\right);]. But now as [;\mathbb{Q}(\sqrt{q^*})\subseteq \mathbb{Q}(\zeta_q);] there is a surjective map [;f:G\to H;], and it is not too hard to see that [;f(\operatorname{Frob}_p) = \operatorname{Frob}_p;], so [;\left(\frac{q^*}{p}\right)=1;] iff [;p\in\ker(f);]. But as [;G;] is cyclic, [;\ker f;] has to be the unique index two subgroup of [;G=(\mathbb{Z}/q\mathbb{Z})^\times;], i.e. the group of quadratic residues. So [;\left(\frac{q^*}{p}\right)=1;] iff [;\left(\frac{p}{q}\right)=1;], completing the proof. I like this because it really gets to the heart of what QR is, namely a statement about how the various Frobenius elements behave in quadratic number fields, and it highlights the key fact that makes it work, namely that quadratic number fields are contained in cyclotomic fields. At the same time, it doesn't rely on any big hammers, like class field theory. There are certainly a lot of more elementary ways to prove it, but I feel like most of those aren't very illuminating. It's kind of hard to see what's really going on without looking at the proof in algebraic number theory terms.
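For reference, the classical statement being proved (in the thread's [; ;] notation, and stated in its standard form): for distinct odd primes [;p;] and [;q;], [;\left(\frac{p}{q}\right)\left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}};], which is equivalent to the form [;\left(\frac{q^*}{p}\right) = \left(\frac{p}{q}\right);] used above.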
The one with the Galois theory of cyclotomic field extensions: https://en.wikipedia.org/wiki/Proofs_of_quadratic_reciprocity#Proof_using_algebraic_number_theory
I was going to post this. This proof is really eye-opening.
Gauss's third proof. [ , 1808] Google Books scan
I don't particularly have a favourite one, but an academic sibling of mine (who did his PhD many years earlier than I) gave a fairly elementary proof of it (and afaik it's the most recent novel proof): https://www.jstor.org/stable/4145015?seq=1#page_scan_tab_contents
[ "Linear Algebra Question, As Applied to Electrical Engineering" ]
[ "math" ]
[ "8pfukg" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.25 ]
null
Sure is.
correct
learnmath allows images, just not . Make a text post there including the URL of your image and actually ask a question in the text post.
Hello, I'd suggest you search "Kirchhoff's Laws." There's plenty on YouTube that explain this.
I know learnmath would be a better place for this, but they don't allow images, and it is hard to explain without the image. Let me know if you can solve this one, or at least can explain it in a way that I can understand it. It's from a linear algebra practice exam. We didn't go over anything related to electrical engineering, and I have no clue what Kirchhoff's Laws are, but I'm assuming I can just plug the linear system into a matrix and solve like I would normally, correct?
[ "Why Triangles Are Better Than Quadrilaterals (An Argument in Three Parts)" ]
[ "math" ]
[ "8petos" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
null
Note that 'trigonometry' (despite the name) is actually largely focused on the relationship between circular arclength and square grids.
Should such an argument even exist?
I thought this was just an established fact
https://amzn.com/9810206003/ Indeed it's unfortunate how so many prefer the Quadrilateral. I was under that assumption as well until I asked around.
https://hexnet.org https://amzn.com/9810206003/
[ "What areas of research in Algebraic Topology are still active?" ]
[ "math" ]
[ "8peekm" ]
[ 27 ]
[ "" ]
[ true ]
[ false ]
[ 0.91 ]
I was told to learn Algebraic Geometry by a few Algebraic Topologists since there aren't many opportunities for research unless one resorts to problems within Algebraic Geometry. Looking through the papers of known Algebraic Topologists, I've noticed that most are researching Chromatic Homotopy Theory, K-Theory (Algebraic + Topological), and TQFT. Are there any other major active research areas in Algebraic Topology? My research interests are Algebraic Topology and Commutative Algebra (haven't studied AG) so it seems Derived AG is the area for me?
I'm a PhD student in my final year and I also work in algebraic topology and commutative algebra. My advisor works in homotopy theory (specifically motivic homotopy theory) and it is still very much an active field of research with several interesting open questions there. Personally, I work with something called Mackey Functors which are the "equivariant analog to abelian groups, rings, and modules" (and I'll try to explain a bit about what that means). When you first learn algebraic topology you learn about singular homology and cohomology which is a bunch of abelian groups and rings. Deeper in you develop cohomology operations and learn about the Steenrod algebra and modules over that algebra. This is to say that when you're trying to measure topological spaces, the "right" tools to use turn out to be abelian groups, rings, and modules. There is a slight abstraction of topological spaces called "G-equivariant topological spaces" which are standard topological spaces equipped with an action of a group G in some way that is topologically continuous. You can talk about G-equivariant continuous maps, homeomorphisms, homotopies, and any of the other things that you normally get with spaces. You can then try to derive algebraic topology all over again in this new category. Unfortunately, groups/rings/modules don't really work any more. When attempting to measure G-equivariant spaces the "right" tools to use turn out to be something called Mackey functors, Mackey rings, and Mackey modules. I won't try to explain what those are, but there's still a lot to learn about them. In fact, my work is restricted only to the case when G is the cyclic group of order 2. My partner also works in homotopy theory doing something completely different. They are working with model categories and specifically something called the Grothendieck topology on categories. This is a largely unexplored area of math that I don't understand very well but it has far more questions than answers at this point. I think it's common for topologists to get shuttled into Algebraic Geometry. It is a really active area of research right now with a large community. It might also be the case that certain schools have shifted entirely to algebraic geometry and don't have anyone working in topology any more. That being said, there's still plenty of work to be done in algebraic topology and commutative algebra.
Differential topology and geometry in the presence of a group action is a classical reason, but these days there are other exciting applications, such as the work of Hill-Hopkins-Ravenel using equivariant homotopy theory to resolve the Kervaire invariant 1 problem in nonequivariant differential topology, and the use of equivariant homotopy theory to study THH (which itself apparently has applications to algebraic K-theory and arithmetic geometry that I don't understand). There also seems to be a connection to theoretical physics, where the group action corresponds to a symmetry of the physical system, but less is known about that potential application.
Maybe start here? https://arxiv.org/list/math.AT/recent
Why do we care about G-equivariant spaces?
Topological Hochschild homology. (Heads up: I've just told you almost everything I know about THH)
[ "Terminology: set of sets that have a subset total ordering?" ]
[ "math" ]
[ "8pdxlh" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.84 ]
Is there a name for a set of sets where the subset/inclusion operator provides a total ordering?
"Totally ordered by inclusion"
For any ordinal r, the collection of all sets with rank less than r (in the usual von Neumann hierarchy) is a transitive set. But for any r > 2, V_r is not totally ordered by subset inclusion. These two properties actually combine nicely: a set is an ordinal iff it is both transitive and totally ordered by inclusion, and sometimes this is even used as the definition of "ordinal".
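A small concrete instance of that last characterization (standard von Neumann coding, not specific to this thread): the ordinal 3 is both transitive and totally ordered by inclusion, since

$$3 = \{0, 1, 2\}, \qquad 0 = \varnothing \subseteq 1 = \{0\} \subseteq 2 = \{0, 1\},$$

and every element of an element of 3 is again an element of 3.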
The set you describe could be, and not infrequently is, called a nested family of sets.
Thanks for the correction.
This may be equivalent to a transitive set ?
[ "Are there any unsolved mathematical problems that mathematicians today no longer care about what the answer is?" ]
[ "math" ]
[ "8phdts" ]
[ 391 ]
[ "" ]
[ true ]
[ false ]
[ 0.98 ]
Specifically, the unsolved problem should be an actual problem that mathematicians in the past have considered, not just a giant addition problem that no mathematician has considered before.
Erik Christopher Zeeman tried for 7 years to prove that one cannot untie a knot on a 4-sphere. Then one day he decided to try to prove the opposite, and he succeeded in a few hours. I just feel so bad for the guy.
I get the feeling that there are tons of mathematical backwaters. Like in the 18th century, mathematicians were obsessed with continued fractions, whereas today they're mostly viewed as a curiosity. There are probably lots more, but I can't name them because they're no longer studied and I'm not a math historian. And they probably all left open problems behind.
Another way to look at it is that he had seven years of practice at disproving the statement.
I wouldn't exactly say that mathematicians no longer care about it. I'd say it's more like we've all accepted that it's probably completely intractable with our current methods, and so people have mostly given up on it for now.
[ "Honeybees Seem To Understand The Notion Of Zero" ]
[ "math" ]
[ "8pczzu" ]
[ 575 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
null
Well, that settles it then, zero is a natural number
I really chuckled at that.
yeah, they mentioned that human toddlers don't understand zero but I wonder how they'd do with the same experiment
The NPR article is written for a layperson audience, if you click through to the real article you can see that there is actually a lot of existing theory about when we can say nonvocal animals are able to "count". I can't get to the pdf on my home PC, but maybe someone with university access can give some more details?
[ "Research Debt" ]
[ "math" ]
[ "8pc90o" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
null
I agree with the overall thrust of this article and found it interesting, but I'd like to comment on a few things. I'm a postdoc in math rather than machine learning, but like the OP said I think a lot of this stuff is relevant to math. "Everyone takes it for granted, and doesn’t realize that things could be different." People in my mathematical circles gripe about this all the time. Maybe that's not true in machine learning? "On one extreme, the explainer can painstakingly craft a beautiful explanation, leading their audience to understanding without even realizing it could have been difficult...In these one-to-many cases, each member of the audience pays the cost of understanding, even though the cost of explaining stays the same." I basically agree with this, but I think many people overvalue "beautiful explanations." On this sub people like to say that math "isn't a spectator sport" -- you have to do it to learn it! Having something beautifully explained to you is often not sufficient. But maybe "explanation" can be broadened to include, for example, well-crafted problem sets. Something important but not mentioned by the author is the advantage of learning as part of a "many," i.e. a community of learners. Every teacher has had the experience of a student giving an explanation better than your own. (Sometimes it's frustrating that the student is a hungover 18 year-old.) The same goes for research debt -- it is much easier to work through a tough or poorly-written paper with a colleague. "Lots of people want to work on research distillation. Unfortunately, it’s very difficult to do so, because *we* don’t support them." The emphasis is mine. Who is "we?" I am a postdoc. I have never supported any mathematician financially. I have never hired anyone or given anyone a raise (in the math world, at least). So who is responsible for this state of affairs? Who would lose in trying to solve it? The author does not ask this question, and I think it's the central one. I also think the answers might be different in math than in ML. (I have a gripe about bringing up tau/electrons/notation but this is my bus stop)
It's not a question of funding. Even if funding for this were made available, the simple reality is that publishing new results is going to count for more as far as career advancement than any sort of distilling ever will. Maybe we should encourage senior mathematicians who have gotten past the point of wanting to do cutting-edge research to write more textbooks. Certainly my field would benefit tremendously from some distilling. But I would be screwing myself over career-wise if I spent the two years it would take to actually distill things into the form they ideally ought to be in (I mean, I have a rough draft of a textbook already but I know full-well that I shouldn't spend any more time on it until at least I have tenure and really not until I'm at least at the rank of full professor). You can argue that people should give more weight to the distilling than they do, but this I'd disagree with: research progress is what matters the most.
I wish that that could happen. But the growth in teaching you mention translates to adjuncts getting paid roughly what I got paid as a grad student. I don't think this is even remotely ok, I'm just calling it as it is.
In addition to many dedicated teaching faculty being adjuncts, it seems that TT teaching faculty are more often rated on the number of buzzwords they can introduce into their teaching statements and how many single-variable calculus textbooks they can write. The system has not put them in a place to have anything to do with material above a second-year undergrad level, in my experience.
It's way too big. I'll (probably) give a better answer tomorrow, but really: it's far too much to ask. And I say this literally having a 400-page rough draft of a book in hand.
[ "Looking for some guidance" ]
[ "math" ]
[ "8pcaqq" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.75 ]
[deleted]
If Erdos were alive, you would invite him over to your house to stay for some days, sleep on the couch or whatever. He would talk to you, figure out how much you knew, pose a problem at your level, and the two of you would figure it out, and you'd be a coauthor on his paper. He mostly did this with other mathematicians, but I believe I read that on some occasions he also did this with high school students. But barring a miracle, another Erdos, or whatever, I would say looking for a research problem is a waste of your time. Instead, make your project to understand a topic from a more accessible subject. If your long-term goal is to eventually do research in high energy physics, then make your goal today to understand some linear algebra. Maybe write a paper explaining the spectral theorem. Even for undergrads, even for beginning grad students, research-level topics are out of grasp. Learn to walk, etc...
I'm googling Erdos biographies online and none of them mention this so I may be misremembering. Maybe it was just that he corresponded with children, posing math problems for them, not that he took them as co-authors. Anyway, yes, no doubt he was a nice guy. Or an eccentric weirdo. Or both.
Seems like Erdos was a nice guy :)
I know he did it with Terry Tao but I can't think of any other mathematicians.
I remember reading or hearing something similar so you might be right about that. No idea where it's from though.
[ "Just started reading Shey's \"Div, Grad, Curl, and All That\". I sincerely hope more sass is to come!" ]
[ "math" ]
[ "8pc4u0" ]
[ 0 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.33 ]
null
Can we also see the sentence in which "formal" is used, for context?
Why are you asking me, who wants to know the context of the screenshotted footnote, instead of OP, who is in possession of the book and is reading it? But for the record, I'll say that in my personal experience, discussions of flux and divergence and curl in physics books (e.g. E&M books) give just as good an intuition as mathematical texts, which just end up referencing the same concepts. If you found it difficult to derive intuition from the physics textbooks, I'm not sure a pure-math-based approach will do any better.
I came to vector calculus through Maxwell's equations rather than via any formal mathematical training. Would you recommend this text to develop a better grounding in div, grad, curl, and all that jazz? I like the sassy approach.
I'm sorry, are you quoting the text? The author asks the reader "would you recommend this text"??? Huh?
lol no. I am asking whether Schey is a good textbook on vector calculus. Is it a book that can offer a deeper understanding to someone who has had to pick up vector calculus on the side and feels like he is making up most of it as he goes? When I studied and used Maxwell's equations, I never felt that I had developed an intuitive feel for the mathematics that described them. I am wondering whether Schey's book will offer that.
[ "working on a new algorithm to calculate and understand PI" ]
[ "math" ]
[ "8pbtaw" ]
[ 14 ]
[ "Image Post" ]
[ true ]
[ false ]
[ 0.56 ]
null
This is basically Archimedes' method for estimating pi. While this gets you an OK estimate, you need to go to a huge number of subdivisions in order to get very good estimates. Nowadays we have far better methods for approximating pi than just doing this: https://en.wikipedia.org/wiki/Approximations_of_%CF%80
What do you even mean by that? Are you talking about using a prime number of sides? That wouldn't get you anything more accurate. An 18-gon would give you a better approximation than a 17-gon. Methods of approximating pi are things mathematicians have studied a lot. I would recommend you familiarize yourself with what has already been done before trying to discover something new. You aren't going to get something better than the Chudnovsky algorithm by just playing around with n-gons in circles like this.
I think he's just using lots of cool words thrown together in the hope that it yields better results (spoiler alert: it won't).
Why primes?
Approximations for the mathematical constant pi (π) in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era (Archimedes). In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century. Further progress was not made until the 15th century (Jamshīd al-Kāshī). Early modern mathematicians reached an accuracy of 35 digits by the beginning of the 17th century (Ludolph van Ceulen), and 126 digits by the 19th century (Jurij Vega), surpassing the accuracy required for any conceivable application outside of pure mathematics.
[ "Stokes' Theorem" ]
[ "math" ]
[ "8pbaq6" ]
[ 52 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
Second year physics major here. How do you interpret/conceptualize Stokes' Theorem? It's such a large piece of contemporary mathematics + physics that I'd like to see how others think of it, beyond its technical definition. Edit: (Also mentioned in a comment) This thread was great, guys. I hope that it serves as a useful reference for others, as it will for me. Incorporating something like Stokes' Theorem into one's intuition, as a mathematician/physicist/etc., can often take a while. Hearing your perspectives was great!
I always thought the simple picture on wikipedia explained the idea better than anything else. https://en.wikipedia.org/wiki/Stokes'_theorem#Underlying_principle The internal curl all cancels out so you just see what is along the boundary (which is the line integral).
I started in physics, yet pure math has corrupted me and I now visualize it in terms of the Generalized Stokes: ∫_∂M f = ∫_M ∇(f). What it means is that if you integrate something which is a derivative of some kind (∇(f) for a class of differential operator ∇ which is like the Gradient, Curl, Divergence, or even one-dimensional d/dx) over some notion of "volume" M as on the right hand side above, you can "cancel" the derivative and the volume integral and only do a lower-dimensional integral over the border which we denote ∂M. It's kinda like just moving the "d". In 1 dimension it's the fundamental theorem of calculus: ∫_a^b f'(x)dx = f(b)-f(a). It's just evaluating on the border of the interval you're integrating over.
It's all about the swirly
I view Stokes' Theorem as a multidimensional version of the Fundamental Theorem of Calculus: the integral of a derivative of a function on a surface is just the "evaluation" of the original function on the boundary (for suitable generalization of derivative and "evaluation").
Going the other direction, in a really abstract way, Stokes' theorem is the statement that the boundary operator on manifolds "looks like" a differential operator- that is, it's the formal adjoint of the exterior derivative.
[ "Math PhD. Is it worth it?" ]
[ "math" ]
[ "8palim" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.56 ]
Straight up. Is it worth it? Having to publish so many papers, tutoring for free (at least at my university), having no life but mathematics. Taking years to solve 1 problem... discovering something new. I know it is hard, grueling, time consuming... but my motto has always been "If it's truly worth it, it isn't easy". New math PhDs... was all that you went through 100% worth it for the knowledge you gained, the truth you now know, the skills you have developed?
What do you want to get out of a PhD? I think that's the best question to ask yourself if you want to decide if it's worth it or not. I'd argue that a PhD isn't really only about "learning more," or "gaining knowledge". A lot of the things you're implying are painful are what people like about the PhD. Getting to discover new stuff/publishing papers was my motivation for entering a PhD program. Learning new math is something I could do anywhere, but I want to change from someone who could learn mathematics to someone who could create it. I also want to be employed as an academic mathematician, which requires a PhD. I'm not done yet, but right now I'd rather be nowhere else.
"she thought we were worth more than 'pure' mathematicians and she started talking a bit about her own life and experience in math and academic jobs" This is so ridiculous that it's not even offensive. So what did you do with that 'extra worth'? Become a finance guy? An accountant? A Wall Street hustler? If you think math is a tool and not a calling then by definition being a mathematician is not for you. If you think making money is the goal of life then get a degree in financial accounting.
Having to publish so many papers, working for tutoring for free (at least at my university) , having no life but mathematics. Takings years to solve 1 problem....discovering something new. I know it is hard, grueling, time consuming... The "no life" part isn't really accurate, but the rest of it is, and honestly, if those other things sound more burdensome than alluring, that's not a great sign.
Some people value things over simply making money, such as love, making meaningful contributions to the body of mathematical knowledge or the world as a whole, fulfilling their goals, raising a family, having amazing life experiences, or making a positive impact upon the lives of others. You may think the most important thing is to accumulate as much wealth as possible before you die, but nowhere is it ever implied that it should be anyone's life purpose.
"If it's truly worth it, it isn't easy." But is the converse true? Ultimately, you have to enjoy it and it can't be a means to an end.
[ "Is there the equivalent of irrational numbers for 3 and 4 dimensions?" ]
[ "math" ]
[ "8p90yi" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.22 ]
It occurred to me that irrational numbers are only important in a world of two dimensions, because some lengths are irrational. If the world were 1d, all notions of distance could be captured with rational numbers, because there would be no holes in the number line to fix. Is it the case that complex numbers are needed for 3D applications? It's not a topic I know much about.
Assuming you want to solve the 1-dimensional equation x^2 - 2 = 0, you need irrational numbers.
Complex numbers come up in engineering in other contexts. It's very unusual to use the real part for one coordinate and the imaginary part for another (also, you're still a dimension short). Complex numbers crop up for instance anywhere you have a sinusoidal wave, or something else periodic that you decompose into sinusoids. It's a million times easier to write sinusoids as the real part of a complex exponential, and do your math on the exponential, and then take the real part again to get your real answer. For example, if you're trying to prove that cos(wt) + cos(wt + 120°) + cos(wt + 240°) = 0 for all t, you're going to have a mess of angle addition formulas and whatnot on your hands. Try to prove instead that exp(iwt) + exp(i(wt + 2pi/3)) + exp(i(wt + 4pi/3)) = 0, and you're basically done once you factor things out.
I'm not sure I quite follow, what do you mean by "there are no holes to fix"? Do you mean maybe that there would be no motivation for irrational numbers? The same way you may want to restrict yourself to studying the subset of rationals, you can perfectly well study the subset of geometric configurations where no irrational distance appears. Conversely, you can study points on the number line and see the emergence of irrational numbers.
No, the usual real numbers are sufficient to describe all lengths in any dimension. The distance of a point (x,y,z) from (0,0,0) for instance is sqrt(x^2 + y^2 + z^2), with the obvious generalization for higher dimensions. This follows from the Pythagorean theorem.
I think there are several things you are missing. First, the motivation for the real numbers is not just that some lengths in 2-d are irrational. If this were the motivation, then the only numbers we would add in would be square roots of rationals (and then square roots of all the new numbers and so on...). So at first you would add in sqrt(2) for instance and then you would need to add in 3+sqrt(2) and so on. However, this process will never get you every real number. For instance, pi and e will never show up because they are transcendental and the numbers we construct are all algebraic numbers (look the words up if you don't know what they mean). Rather, the motivation to add in real numbers is that we want our number system to be closed under limits. For instance, if I take a measurement of a length, then my accuracy will be limited by my tools. So let us say I want to measure the circumference of a circle of radius one. At first, I might take line segments of some length and approximate the circle using these (like this: https://goo.gl/images/m6L2BD ). You can see that as I use shorter and shorter line segments, I will get a more accurate value for the length of the circumference. But what does this actually mean? What this means is that I will measure consecutively the circle to have values say 6, 6.28, 6.2831 and so on, and we more or less identify this sequence of numbers with the real number 2pi (which is the "actual" circumference). The key property of this sequence of numbers is that the difference between consecutive terms gets smaller and smaller and can be made as small as needed if I go far enough in the sequence. We identify such sequences with a real number (different sequences might correspond to the same real number). A decimal expansion is exactly this kind of sequence. The decimal number 0.1111111111111... for instance corresponds to the sequence 0.1, 0.11, 0.111 and so on. Once we have constructed the real numbers, the complex numbers serve a different purpose and don't really come into this picture of "filling in holes in a line".
[ "Is it just me or is Math Overflow one of the most unwelcoming, pretentious communities on the internet?" ]
[ "math" ]
[ "8p81qe" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.11 ]
[deleted]
Well, "MathOverflow is a question and answer site for professional mathematicians." Are you a professional mathematician yourself?
The thing to remember is that for many users on MO, asking a question is not too different from walking into their office when they're working and asking them the question. If your question is elementary, they'll be annoyed. When mathstackexchange can't answer your questions anymore, you'll know it's time to move.
That's not the point i was trying to make. The point i was trying to make was that because it's a community for *professional* mathematicians, the sort of knowledge and experience expected of contributors is going to be much higher than it would otherwise be. This means that there is less tolerance of contributions that aren't appropriately sophisticated (e.g. "But it doesn't make sense that 0.999... = 1!!!"), which in turn means community regulars might come across as 'pretentious' and 'unwelcoming'. i know from personal experience - not on MO, but more generally - that it can quickly get tiring to have people expecting you to devote your time and energy to providing responses that are essentially available via either Wikipedia or the Internet more generally, and which can easily be found via a simple Web search. (And clearly, if you're asking questions on an Internet-based forum, you have Internet access.)
I think it's just you. It's a really useful resource as a working mathematician, and also a good source of interesting questions and discussions. If what you're interested in is genuinely research mathematics, and you've made an honest attempt to solve the problem yourself, you won't have problems with your questions being closed. The problem usually is that not many people who are not research mathematicians understand which questions are appropriate for mathoverflow. This problem is normally worsened because when people who do understand this distinction tell the people who don't understand this distinction that their questions are not appropriate, the latter normally get annoyed. What those people should remember is that no-one is under any obligation to answer your question at all, and if the experts who you're asking for help on your problem don't think it's an appropriate problem, they're probably better placed to judge how they want to spend their time than you are. Just ask your question on Mathstackexchange instead; 99.99% of the time it's what you should have done in the first place.
I have never found what you claim to be the case when the questions asked are MO-appropriate. Are you able to provide an example of the kind of questions you may have asked?
[ "Open discussion: Can anyone find a pattern to these expansions? Is there a way to derive a general function for sin(nx)?" ]
[ "math" ]
[ "8p95zy" ]
[ 253 ]
[ "Image Post" ]
[ true ]
[ false ]
[ 0.89 ]
[deleted]
You definitely can, and it's not particularly hard if you use de Moivre's formula, which states that [; \cos nx+i\sin nx=(\cos x+i\sin x)^n. ;] Using the binomial theorem, you can expand this as [; \cos nx+i\sin nx= \sum_{k=0}^n \binom{n}{k}i^k \sin^k x \cos^{n-k} x=\sum_{k\text{ even}}\binom{n}{k}(-1)^{k/2} \sin^k x\cos^{n-k}x+i\sum_{k\text{ odd}}\binom{n}{k}(-1)^{\frac{k-1}{2}}\sin^k x \cos^{n-k}x. ;] If we're interested in a general formula for [; \sin nx ;] we can take the imaginary part of the above equation to obtain [;\sin nx=\sum_{k\text{ odd}}\binom{n}{k}(-1)^{\frac{k-1}{2}}\sin^k x \cos^{n-k}x. ;] While this equation is very nice, it is not of the same form as the one you have worked out the first few terms of. However, we can get to your form by replacing all instances of [;\cos^2x;] with [;1-\sin^2x;], which I'd think would be a nice little exercise.
Your writing is very pretty ☺️
Those are Chebyshev polynomials. In particular: T_n(sin(θ)) = ±sin(nθ) for odd n, where the plus or minus alternates. Or alternatively, U_(n-1)(cos(θ))sin(θ) = sin(nθ). It's even better for cosines: T_n(cos(θ)) = cos(nθ). Check out the list in the lower part of the Wikipedia page.
A lot of people would beg to differ. I am not one of them though
[ "Confused About Infinity" ]
[ "math" ]
[ "8p7klc" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.36 ]
There are infinitely many numbers between 0 and 1, but also infinitely many numbers between 0 and 100. Theoretically there should be 100x more numbers between 0 and 100 than between 0 and 1, but the amount of numbers between 0 and 1 and between 0 and 100 is exactly equal. I don't understand how this is possible. Is this why the laws of physics break down on the quantum level? Because maybe our understanding of math breaks down at certain levels also? I don't know, maybe a dumb question. But I'm just kind of confused.
You are correct: there are 100x as many numbers. However, cardinal numbers do not obey the properties you are used to from finite numbers: specifically, 100 times an infinite cardinal is the same cardinal, not a larger cardinal. So 100x as many is still the same amount. There's no contradiction. It's a bit weird, but you get used to it.
Just so you know, quantum physics has nothing to do with this. QFT breaks down because there are a few missing pieces we haven't figured out, so we need to cut off the theory at low scales (i.e., what happens below the Planck scale could be described by some new theory such as quantum gravity). It does not break down because of the cardinality of the real numbers.
I don't think there's anything showoffish about using the right terminology, especially when OC linked to a description of what it means.
Since when was a cardinal number a technical term?
You're talking to the wrong person. In addition, I disagree with your idea that "things introduced at a university are considered technical", especially since cardinals would be one of the first things a math student would learn at university. By your logic, that makes the word "sets" a technical term.
[ "How many unique groups are there of 7 unique items?" ]
[ "math" ]
[ "8p6qsi" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.22 ]
null
Pascal's triangle has the answers. If you look at the 7th row, it reads 1 7 21 35 35 21 7 1. 1 is the number of groups of no items. 7 is the number of groups of 1 item. 21 is the number of groups of 2 items. Et cetera.
I don't think that's what he meant (I don't even think he knows what a group is)
The number of ways to select m elements out of a set of n elements is given by the combination formula. Google that and pick the one that makes sense to you. Here’s the Wikipedia entry: https://en.m.wikipedia.org/wiki/Combination
Sure, "group" is a term of art in maths, and ordinarily I wouldn't use it with its everyday meaning in a mathematical context. But I think OP just meant 'set,' and I answered accordingly.
[ "What other fields are most strongly/traditionally associated with mathematics?" ]
[ "math" ]
[ "8p6wqn" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 0.78 ]
For instance, you commonly see familiarity with these fields among mathematicians, and sometimes even people who worked separately in both mathematics and one of these fields. I'm thinking of things like physics, computer science, and philosophy. Of course, many fields have a use for mathematics, but there seems to be less of a similarly strong modern or historical link between, say, math and chemistry, or economics. Are there other fields besides the three I mentioned with associations I am unaware of?
I would expect that economics is actually pretty closely tied to mathematics. The "TV talking head" economist may do more punditry than math, but I don't think that's really representative of the field.
Economics is trending more and more into the math field. I saw a paper applying some complex analysis to economic theory recently. In terms of applying to graduate school, they like their applicants to have at least taken up to Real Analysis, with a decent background in differential equations (and often stochastic modeling). Game theory is also traditionally a subfield of economics, which is definitely within the math sphere of influence.
There's statistics, if you're willing to count that as separate from math.
Engineering.
Chemistry is a huge one.
[ "What are some of the best maths blogs out there?" ]
[ "math" ]
[ "8p76qr" ]
[ 142 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
I've found blogs like Terry Tao's and Tim Gowers's to be interesting; what other ones are good?
I really like Math With Bad Drawings. It's not super technical, but it has a really good variety of things related to math: math education, math culture, current events in the math world, math jokes, etc. Plus it's often pretty funny.
Math ∩ Programming. His posts are detailed and readable. I like to read them on the train.
Shtetl Optimized if you're into complexity theory.
Here are some of the ones I follow: Qiaochu Yuan Carlos Matheus Michael Hutchings Burt Totaro Igor Pak Alex Youcis Low dimensional topology Jordan Ellenberg Secret Blogging Seminar
To piggyback on this, any good applied math or stats blogs as well? I'd personally recommend John D. Cook's blog.
[ "Question on Monoid presentations" ]
[ "math" ]
[ "8p63v4" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
Given a set {1,2,...,n}, there is a monoid given by the set of all functions from it to itself, called the transformation monoid Tn. Is there a presentation for this monoid, similar to the one for the symmetric group Sn?
I think there is a nice and pretty short presentation you can give. I half-expect to be wrong on some minor technical point, but it should be correctable. Generators are transpositions and an extra function f, which we think of as defined by f(i)=i for i<n and f(n)=n-1. The relations are: (ij)^2 = id; (ij)(kl)=(kl)(ij) for k,l disjoint from i,j; ((ij)(kl))^3 = id if i,j is not disjoint from k,l; f commutes with transpositions disjoint from n; f^2 = f; f(n-1,n) = f. Why does it work? The bit without f gives a presentation of S_n (it's pretty standard). In any faithful representation of this as a monoid of functions, the "transposition" generators also go to transpositions on 1,...,n up to a cyclic relabeling (among the elements of order 2 in the group they can be identified by the sizes of the intersections of their centralizers with other conjugacy classes; of course they must go to bijections since their squares are the identity). f is idempotent so it's the identity on its image. It commutes with permutations on 1,...,n-1, so it must contain 1,...,n-1 in its image*. The relation f(n-1,n)=f tells us that it acts the same on those two elements, so f(n)=f(n-1). * If f contains i in its image but not j, f(i)=i and (i,j)f(i)=j=f((i,j)i)=f(j) by the commutation relation. If it contains none of the elements 1,...,n-1 in its image, it must have an image equal to just n, so it is constant and the monoid we get is smaller than it would be otherwise; hence this presentation is not faithful. Question: is there a representation of size logarithmic in n?
I have a question: consider the map 1->x, 2->y, ..., n->z as the string xy...z, and concatenation as composition. Is there a function which takes 2 of these strings and returns another string which is the composition of those two maps, as maps, as a string?
I don't understand the question (bear with me, I haven't had my coffee). You are asking for a certain type of function on maps (which are themselves functions). Are you asking to find such functions in a representation of a particular given monoid, or maybe trying to produce a new one? Can you give a small example of what you mean?
Here's an example: the transformation monoid is from {1,2,3} to itself and has 27 elements. Generators for it as strings would be 213, 113, and 312. What would a function look like that takes in two such maps, such as those generators, and returns another string that, when considered as a map, is their composition: F(213,113) = the composition of 1->2,2->1,3->3 and 1->1,2->1,3->3, as a string xyz where x,y,z are in {1,2,3}.
I'm still not sure what you're asking. F(s1,s2) should be a string which at the i-th place has what s1 has at the index j such that s2 has the value j in its i-th place, so something like: F(s1,s2)[i] = s1[s2[i]]. This is just a description of the composition function. Is it what you meant?
[ "Could there be a set of numbers between real and complex? (elaborated)" ]
[ "math" ]
[ "8p5r1g" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.52 ]
Real numbers can only be positive or negative, but abstractly couldn't there be a set between complex and real numbers? Couldn't there be some abstract neutral numbers, sort of like imaginary numbers but without the sqrt(-1) aspect? They may not have any physical meaning like imaginary numbers, but I don't see why you can't include them.
Well, what do you want to do with your numbers? If you just want to add and multiply, you can add as many "new" number systems as you want. You just take a bunch of copies of R and add and multiply them componentwise. But what if we want multiplication more like in the complex numbers? Well, let's define our new number system the exact same way but without i^2 = -1. Then when I multiply (a+bi)(c+di) I get ac + (ad+bc)i + bdi^2. And I could multiply that in similar ways. But if you think about it long enough, what will you find? It's exactly the set of polynomials in one variable. This is an important way to think about it, because if you make the stipulation i^2 = -1 (in fancy terms, taking the quotient R[i]/(i^2 + 1)) you get the complex numbers. This second system is a lot more important than the first, but is it as good as the complex numbers? Not if you want to be able to divide. The complex numbers have multiplicative inverses for all but 0, but polynomials can't if their degree is greater than 0. A good question to ask is: can I come up with an extension that allows division? And the answer is basically no. The complex numbers are as good as it gets.
Just to add on to /u/DamnShadowBans : If this operator also satisfies °°x=°x (neutralizing a neutral number does nothing), then all your numbers are of the form x + °y, with (a+°b)(c+°d)=ac+°(bc+ad+bd). ° is functioning like a new number, j, such that j^2 = j. In this case we have (1-2j)^2 = 1, and you've introduced a version of the split-complex numbers. This was inevitable, in a sense: if you want to introduce a '2d' number system over the reals, then there are only three: one that adds a solution to x^2 = -1, one that adds a new solution to x^2 = 0, and one that adds a new solution to x^2 = 1. All other apparent choices, if there are only two generators, can be reduced to these three by relabelling.
In order for such a set of numbers to be interesting, it is generally assumed that you're asking if there is a field that lies between R and C. In this case, I think the answer is no. You could see this from the fact that C is a degree 2 field extension of R, meaning that any intermediate extension must be either R itself or C itself. Also I just finished our school's course on Galois Theory so if someone could correct me please do so!!
It seems rational to me to say °1 - °4 = °(1-4) = °(-3), if you make that choice. And if you also choose it so that °a°b = °(ab), you will basically have arrived at what I've described in the second paragraph: polynomials in °.
There can't be a field (something that has all the properties of the rationals) that contains the reals, is a subset of the complex numbers, and is neither. You can definitely have some set between them. It just wouldn't be a very good number system, in the sense that you couldn't use any of the very nice properties of the reals or the complex plane when doing math with them.
[ "\"What do we want a foundation to do? Comparing set-theoretic, category-theoretic, and univalent approaches\", by Penelope Maddy [PDF]" ]
[ "math" ]
[ "8p718b" ]
[ 44 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
null
So, several things we may ask of a mathematical foundation are: risk assessment, generous arena, shared standard, meta-mathematical corral, and essential guidance. She considers three foundational systems: Set theory satisfies the risk assessment, generous arena, shared standard, and meta-mathematical corral criteria, but not essential guidance, which has a fundamental tension with generous arena and shared standard. Category theory offers essential guidance, but not risk assessment, generous arena (because categories are only useful for algebra, not analysis?), shared standard, or meta-mathematical corral. Also, category-theoretic objections to set theory are invalid, and category theory can't be a foundation, because unlimited category theory is inconsistent. Homotopy type theory's ability to provide a generous arena and shared standard is called into question, but it offers something new which neither set theory nor category theory does: the potential for automated proof checking (as yet unrealized).
i'm not a fan of it either; i prefer to use LaTeX where possible, and LibreOffice otherwise. But it's Maddy's choice, for whatever reason, to use Word, and in this instance, i'm more interested in the content of her work than in the software she used to create it. *shrug*
I have not read the 2014 Ernst paper. I guess I will read it now. Is this going to be the Russell's paradox of category theory?
Formal set theory got a fix when it suffered this defect. Martin-Löf type theory got a fix. I hope there will be a fix for unlimited category theory too. I wonder what the state of the art is for this.
what's with the monospace font
[ "Is there a mathematics discord server for group study?" ]
[ "math" ]
[ "8p5bsd" ]
[ 16 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
I just finished my freshman year of college and was wanting to get a head start on things like vector calculus and differential equations (which we kinda only touched base on in Calc 2). Is there a discord server I could join full of people who want to sharpen these same kind of skills?
Discord is the worst place for math-related discussions.
Every time I join a math or physics discord it's people arguing and showing off their egos. It's worse than even Stack Exchange; if you ask simple questions you just get belittled.
Try making a study group with people from your university or people you know. Strangers online tend to just end up being rude or uncommitted.
One of the issues many people have with Math SE is that whenever a response is short and sweet, quite a few people will upvote it regardless of whether or not the math is well beyond the asker. For example, suppose someone asks what the difference between a monoid and a group is. Instead of giving the basic difference (groups are monoids where every element is invertible), the responder might say something like "Well, groups are groupoids with one object and monoids are categories with one object. It's clear that not all categories are groupoids, so we are trivially done." Of course, many of us fascinated by category theory will upvote even though this response isn't exactly helpful. Every time I asked a question having to do with fundamental groups back in February, one guy kept responding with a slick solution using fundamental groupoids but would stop halfway and say to read his book for the rest. I never learned about fundamental groupoids until a week ago...
Every time I asked a question having to do with fundamental groups back in February, one guy kept responding with a slick solution using fundamental groupoids but would stop halfway and say to read his book for the rest. Shameless self-promotion? Do they have rules against that?
[ "Just tried to go to mathxl and my computer started downloading files automatically and a red screen saying my computer might be infected came up?" ]
[ "math" ]
[ "8p48hh" ]
[ 0 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.5 ]
null
For me, it redirects to https://www.pearsonmylabandmastering.com/northamerica/mathxl/ . It does a weird thing as it loads, where it opens and then closes a second tab. But no downloads. It could be a weird thing about the site, or you could have a virus; I don't know. But Pearson is a reputable US educational company, so it's unlikely to be hosting malware. This isn't really a math issue. You might have better luck troubleshooting on a computer support subreddit (if such a thing exists).
Looks like a Pearson site, probably legit.
Yeah, I know it's supposed to be legit. The first time, I typed mathxl.com into the URL bar and it started downloading files and saying my computer may be infected. Then I turned off my computer, deleted the file, and searched for mathxl on Google, and I went to the same website, mathxl.com, but this time I clicked the link and it did the same thing. I'm wondering if the site is compromised, or if it's a local problem on my computer that was somehow triggered by visiting that site.
Thank you for looking into it. I do think this could be relevant to this subreddit: if it's something that happens when other people visit that site to work on math (it's a website commonly used by schools/colleges), then knowing about it would help protect people who browse this subreddit and use that site.
I disagree.