Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]

1. this WP article was the fifth in a series of articles following the security of the web from its beginnings to relevant topics of today. discussing the security of linux (or lack thereof) fits well in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim for your own recent pieces on the topic. you don't like the facts? then say so. or better yet, do something constructive about them like Kees and others have been trying. however silly comparisons to old crap like the Mindcraft study and fueling conspiracies don't exactly help your case.

2. "We do a reasonable job of finding and fixing bugs." let's start here. is this statement based on wishful thinking or cold hard data you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's longer than the lifetime of many devices people buy, use and ditch in that period.

3. "Issues, whether they are security-related or not, are patched quickly," some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bug reports will be treated with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples aren't statistics, so once again, do you have numbers or is it all wishful thinking? 
(it's partly a trick question because you'll also have to explain how something gets decided to be security-related, which as we all know is a messy business in the linux world)

4. "and the stable-update mechanism makes those patches available to kernel users." except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.

5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream." you don't need to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend hundreds of hours of our time to upstream our code, you'll have to pay for it. no ifs no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it quite hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives, after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.

Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]

Money (aha) quote: > I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well. Nobody expects you to serve your code on a silver platter for free. The Linux foundation and big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) 
should pay security specialists like you to upstream your patches.

Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]

I'd just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you have (probably unintentionally) dismissed all of the parent's arguments by pointing at their presentation. The tone of PaXTeam's comment shows the frustration built up over the years with the way things work which I think should be taken at face value, empathized with, and understood rather than simply dismissed. 1. http://rationalwiki.org/wiki/Tone_argument 2. http://geekfeminism.wikia.com/wiki/Tone_argument Cheers,

Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]

why, is upstream known for its basic civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?

Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]

No Argument

Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]

Please don't; it doesn't belong there either, and it especially doesn't need the cheering section that the tech press (LWN generally excepted) tends to provide.

Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]

OK, but I was thinking of Linus Torvalds

Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]

Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]

Why should you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume somebody giving a company (ahem, PaXTeam) money is the only answer. 
(Not meaning to impugn PaXTeam's security efforts.) The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (whether real or perceived), but simply throwing money at the problem won't fix it. And yes, I do realize the commercial Linux distros do lots (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's much more involved than just that.

Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]

Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]

I think you definitely agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going towards security... and now it needs to. Aren't you glad?

Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]

they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux. > And if Jon had only talked to you, his would have been too. given that i'm the author of PaX (part of grsec) yes, talking to me about grsec matters makes it the best way to research it. but if you know of someone else, be my guest and name them, i'm pretty sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands). > [...]it also contained quite a few groan-worthy statements. 
nothing is perfect but considering the audience of the WP, this is one of the better journalistic pieces on the topic, regardless of how you and others dislike the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;). speaking of your complaints about journalistic qualities, since a previous LWN article saw it fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article? > Aren't you glad? no, or not yet anyway. i've heard a lot of empty words over the years and nothing ever manifested or worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (which Linus rightfully despises FWIW).

Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]

Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]

Right now we've got developers from big names saying that doing everything the Linux ecosystem does *safely* is an itch that they have. Sadly, the surrounding cultural attitude of developers is to hit functional goals, and occasionally performance goals. Security goals are often ignored. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a process that will take a sustained effort, not merely the upstreaming of patches. Whatever the culture, those patches will go upstream eventually anyway because the concepts that they embody are now timely. 
I can see a way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this sort of problem, here's how everything will keep working because $evidence, note carefully that you're staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users to follow the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence. K3n.

Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.

Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]

Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]

So I have to admit to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?

Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]

I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.

Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]

Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]

I hope I'm wrong, but a hostile attitude isn't going to help anybody get paid. It's at a time like this, when there's a demand for something you appear to be an "expert" at, that you demonstrate cooperation and willingness to participate, because it's an opportunity. 
I'm rather surprised that somebody doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of these in a typical career, and a handful at the most. Sometimes you have to invest in proving your skills, and this is one of those moments. It appears the kernel community may finally take this security lesson to heart and embrace it, as the article calls it, a "Mindcraft moment". This is an opportunity for developers that may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end the developers that exploit the opportunity will prosper from it. I feel old even having to write that.

Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]

Perhaps there's a chicken and egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and groups with a history of being able to get code upstream. It's perfectly reasonable to prefer working out of tree, providing the ability to develop impressive and important security advances unconstrained by upstream requirements. That's work somebody might also want to fund, if that meets their needs.

Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]

You make this argument (implying you do research and Josh doesn't) and then fail to support it with any cite. It would be far more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts. > case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence. 
For those following along at home, here's the relevant set of threads: http://lists.coreinfrastructure.org/pipermail/cii-discuss... A quick precis is that they told you your project was bad because the code was never going upstream. You told them it was because of kernel developers' attitude so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes and finally even your apologist told you that submitting a proposal might be the smartest thing to do. At that point you went silent, not vice versa as you suggest above. > obviously i won't spend time to write up a begging proposal just to be told that 'no sorry, we don't fund multi-year projects at all'. this is something that one should be told upfront (or heck, be part of some public guidelines so that others will know the rules too). You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you'll spend it, they're unlikely to disburse. Saying I'm smart and I know the problem, now hand over the money, doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals. > as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)? jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l 1 Stellar, I must say. And before you light off on those who've misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill and one of the reasons teams like Linaro exist and are well funded. If more of your stuff does go upstream, it will be thanks to the not inconsiderable efforts of other people in this area. 
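(A self-contained sketch of the crediting dispute above, using a throwaway demo repository with made-up commit messages, since the real kernel tree isn't needed to show the mechanics: the Author-based one-liner counts only commits authored under a name, while `git log --grep` also surfaces fixes that merely credit a reporter in the commit message.)

```shell
# Build a tiny demo repo: one commit crediting a reporter, one unrelated.
tmpdir=$(mktemp -d) && cd "$tmpdir"
git init -q repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: overflow check (reported by the PaX Team)"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "unrelated cleanup"

# Author-based count finds nothing; message grep finds the credited fix.
git log --oneline | grep -ci 'Author: pax.*team' || true   # prints 0
git log --grep='PaX Team' --oneline | wc -l                # prints 1
```

Neither number settles who deserves credit, of course; it only shows the two searches measure different things.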
You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly standard first-stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there. Now here's some free advice in my field, which is helping companies align their businesses in open source: The selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you have to bear in mind that it will be a hard sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact "crazy security person" will become a self-fulfilling prophecy. The advice? it was obvious to everyone else who read this, but for you, it is: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B and you might even have a Plan A selling a rollup of upstream-track patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed on the grounds that your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.

Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]

> Second, for the potentially viable pieces this would be a multi-year > full time job. Is the CII prepared to fund projects at that level? If not > we would all end up with a lot of unfinished and partially broken features. please show me the answer to that question. 
without a definitive 'yes' there's no point in submitting a proposal because this is the time frame that in my opinion the job will take and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information. > Stellar, I must say. "Lies, damned lies, and statistics". you do realize there's more than one way to get code into the kernel? how about you use your git-fu to find all the bug reports/suggested fixes that went in because of us? as for specifically me, Greg explicitly banned me from future contributions via af45f32d25cc1 so it's no wonder i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example because it's also the one that Linus censored for no good reason and made me decide to never send security fixes upstream until that practice changes). > You now have a business model selling non-upstream security patches to customers. now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though as it hasn't paid anyone's bills. > [...]calling into question the earnestness of your attempt to put them there. i must be missing something here but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory to see how serious that whole organization is about actually securing core infrastructure. in a way i've got my answers, there's nothing more to the story. as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code solving complex problems are few and far between, and not the kind one finds on short order. 
such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in starvation. PS: since you're so sure about kernel developers' ability to reimplement our code, perhaps have a look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation and try to understand the reason. or just have a look at all the CVEs that affected say vanilla's ASLR but didn't affect mine. PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).

Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]

In other words, you tried to define their process for them ... I can't think why that wouldn't work. > "Lies, damned lies, and statistics". The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one line command anyone could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more? > i've never in my life tried to submit PaX upstream (for all the reasons discussed already). So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.

Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]

what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words you admit that my question was not actually answered. 
> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove, as they say even in kernel circles, code speaks, bullshit walks. you can look at mine and decide what i can or can't do (not that you have the knowledge to understand most of it, mind you). that said, there are clearly other more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel, not everyone's dying to get their code in there, especially when it means having to put up with the kind of stupid hostility on lkml that you now also demonstrated here (it's ironic how you came to the defense of josh who specifically asked people not to bring that infamous lkml style here. good job there James.). as for world domination, there are many ways to achieve it and something tells me that you're clearly out of your league here since PaX has already achieved that. you are running code that implements PaX features today.

Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]

I posted the one line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten http://lwn.net/Articles/663591/): > as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)? 
I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?

Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]

Please provide one that's not wrong, or less wrong. It should take less time than you've already wasted here.

Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]

anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much with explicitly not trying, imagine if i did :). it's an incredibly complex task so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).

Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]

*shrug* Or don't; you're only sullying your own reputation.

Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]

I wouldn't bother

Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]

Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]

Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]

Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to . PaXTeam isn't averse to outright lying if it means he gets to look right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction isn't a brazen lie, but given that the two posts were made within a day of each other I doubt it. 
(PaXTeam's complete unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's prepared to stoop to when something *is* at stake. Gosh I wonder why his fixes aren't going upstream very fast.)

Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]

> and that one commit you found that went in despite said ban also someone's ban doesn't mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).

Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]

I don't see this message in my mailbox, so presumably it got swallowed.

Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

You are aware that it's entirely possible that everyone is wrong here, right? That the kernel maintainers need to focus more on security, that the article was biased, that you're irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right it doesn't mean you are?

Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]

I think you have him backwards there. 
Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to a lot of the community, the article might in fact contain a lot of truth.

Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]

Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]

"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true" Just as you criticized the article for mentioning Ashley Madison even though in the very first sentence of the following paragraph it mentions it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers about the prevalence of Linux in the world, if you're criticizing the mention then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security? As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the 5th in a long-running series following a fairly predictable time trajectory. No, we didn't miss the overall analogy you were trying to make, we just don't think you can have your cake and eat it too. -Brad

Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]

Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]

It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! 
:-) K3n.

Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]

Sadly, I understand neither the "security" folks (PaXTeam/spender) nor the mainstream kernel folks in terms of their attitude. I confess I have absolutely no technical capabilities on any of these topics, but if they all decided to work together, instead of having endless and pointless flame wars and blame game exchanges, a lot of the stuff would have been done already. And all the while everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback. Perplexing stuff...

Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]

Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]

Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on. So, it's not either/or. It's probably "it depends". However, if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday choices for distributors and users.

Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]

How sad. 
This Dijkstra quote comes to mind immediately: Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."

Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]

I guess that fact was too unpleasant to fit into Dijkstra's world view.

Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]

Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum and really proofs are the only way forwards. I'm no security expert, my field is all distributed systems. I understand and have implemented Paxos and I believe I can explain how and why it works to anyone. But I'm currently doing some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about this stuff at all. So I started defining the properties needed and step by step proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find it both entirely obvious that this will happen and utterly terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.

Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Indeed. And the interesting thing to me is that once I reach that point, tests aren't sufficient - model checking at a minimum and really proofs are the only way forwards. Or are you just using the wrong maths? 
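(For readers unfamiliar with the structures ms mentions: the vector clocks underlying that causality reasoning fit in a few lines. A minimal sketch; the function names are illustrative, not from any post in this thread, and the subtlety ms describes lives in composing such clocks with consensus, not in the clock itself.)

```python
def increment(clock, pid):
    # A local event at process `pid` ticks that process's own counter.
    c = dict(clock)
    c[pid] = c.get(pid, 0) + 1
    return c

def merge(local, received, pid):
    # On message receipt: elementwise max of both clocks, then tick our entry.
    merged = {p: max(local.get(p, 0), received.get(p, 0))
              for p in set(local) | set(received)}
    return increment(merged, pid)

def happened_before(a, b):
    # a -> b iff a <= b elementwise and a != b (the happens-before partial order).
    keys = set(a) | set(b)
    return all(a.get(k, 0) <= b.get(k, 0) for k in keys) and a != b

a = increment({}, "p1")        # event at p1: {"p1": 1}
b = merge({}, a, "p2")         # p2 receives p1's message: {"p1": 1, "p2": 1}
print(happened_before(a, b))   # True
print(happened_before(b, a))   # False
```

The "infinite interleavings" problem is exactly that `happened_before` is only a partial order: concurrent events compare false in both directions, and tests can't enumerate every admissible interleaving.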
Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and marvel at how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head". But it's easy - by education I am a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals forces, and stuff. Point is, you need to *layer* stuff, and look at things, and say "how can I split components off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of similar objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so easily? :-) Going back THIRTY years, I remember a story about a guy who built little computer crabs, that would quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a little bit of the problem and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".
Cheers, WolPosted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]To my understanding, this is exactly what a mathematical abstraction does. For example in Z notation we would construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or the previous aggregate schema composed of schemas A through O (for which these have already been argued). The end result is a set of operations that, executed in arbitrary order, result in a set of properties holding for the result and outputs. Thus proving the formal design correct (w/ caveat lector regarding scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]Looking through the history of computing (and probably plenty of other fields too), you'll most likely find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture. (Medicine, an interest of mine, suffers from that too - I remember someone talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.) Cheers, WolPosted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]https://www.youtube.com/watch?v=VpuVDfSXs-g (LCA 2015 - "Programming Considered Harmful") FWIW, I think that this talk is very relevant to why writing secure software is so hard. -Dave.Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]While we are spending millions on a multitude of security problems, kernel issues are not on our top-priority list. Actually I remember only once having discussed a kernel vulnerability.
The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability. But "patch management" is a real problem for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update. Another problem is embedded software or firmware. These days almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Commonly these systems don't survive our mandatory security scan, because vendors still haven't updated the embedded openssl. The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model must be able to finance the resources providing the updates. Overall I'm optimistic: networked software is not the first technology used by mankind causing problems that were addressed later.
Steam engine use could lead to boiler explosions but the "engineers" were able to reduce this risk significantly over a few decades.Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: The people who learn to hack into these systems through kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed on the whole where data has been stolen in order to be released to embarrass people, it _seems_ as if those hacks are through much simpler vectors. I.e. lesser skilled hackers find there's a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences. So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is far more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't bother your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]On the other hand, some effective mitigation at kernel level would be very helpful to crush a cybercriminal/skiddie's attempt. Say one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely.
Then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit much easier. Will you explain the failosophy "a bug is a bug" to your customer and tell them it will be okay? Btw, offset2lib is ineffective against PaX/Grsecurity's ASLR implementation. For most commercial uses, more security mitigation within the software won't cost you extra budget. You will still have to do the regression test for every upgrade.Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]Keep in mind that I focus on external web-based penetration tests and that in-house assessments (local LAN) will probably yield different results.Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link](Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]I'd just like to add that in my opinion, there is a fundamental problem with the economics of computer security, which is especially visible currently. Two problems, even, perhaps.
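[Editor's aside: the ASLR that offset2lib-style techniques undermine is easy to observe. The sketch below - an illustrative probe, not the offset2lib attack itself - prints libc's printf address in two fresh processes; on a kernel with ASLR enabled the two addresses normally differ between runs, which is precisely the randomization that a leaked fixed inter-region offset lets an attacker undo.]

```python
# Observe ASLR: the address of libc's printf in two independently started
# processes. With address-space randomization enabled (the Linux default),
# the addresses usually differ; offset2lib-class attacks work around this
# by exploiting constant offsets *between* randomized regions.
import subprocess
import sys

probe = (
    "import ctypes, ctypes.util;"
    "lib = ctypes.CDLL(ctypes.util.find_library('c') or None);"
    "print(hex(ctypes.cast(lib.printf, ctypes.c_void_p).value))"
)
addrs = [
    subprocess.check_output([sys.executable, "-c", probe]).strip()
    for _ in range(2)
]
print(addrs[0] != addrs[1])  # usually True when ASLR is on
```

Whether the final line prints True depends on the host's ASLR configuration, which is why it is not asserted here.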
First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are mainly chosen just in order to "do something" and get better press. It took me a long time - maybe decades - to claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save money/resources for myself) than take a bad approach at solving it (and have no money/resources left when I realize I should have done something else). And I find there are lots of bad or incomplete approaches currently available in the computer security field. Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. And we really need to enlighten the press on that, because it is not so easy to assess the effectiveness of security mechanisms (which, by definition, should prevent things from happening). Second, and this may be more recent and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new protection mechanisms. This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness. Nevertheless, all the resources go to those adult teenagers playing the white-hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis.
And now also for the cyberwarriors and cyberspies who have yet to prove their usefulness at all (especially for peace protection...). Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right whatsoever to any of the budget allocation decisions. Only those working on protection should. And yep, it means we should decide where to put those resources. We have to claim the exclusive lock for ourselves this time. (And I guess the PaXteam could be among the first to benefit from such a change.) While thinking about it, I would not even leave white-hat or cyber-guys any hype in the end. That is more publicity than they deserve. I crave for the day I will read in the newspaper that: "Another of those ill-advised debutant programmer hooligans who pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and managed nevertheless to bring one of those unfinished and bad-quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to bring more security engineer positions into the academic domain or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to be unprofessional in this affair."Hmmm - cyber-hooligans - I like the label. Though it doesn't apply well to the battlefield-oriented variant.Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are massive amounts of money that go into 'cyber security', but it is usually spent on government compliance and audit efforts.
This means that instead of actually putting effort into correcting issues and mitigating future problems, the vast majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and changes. Some level of regulation and standardization is absolutely needed, but lay people are clueless and are completely unable to discern the difference between somebody who has valuable expertise and some company that has spent millions on slick marketing and 'native advertising' on large websites and in computer magazines. The people with the money unfortunately only have their own judgment to rely on when buying into 'cyber security'. > Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some corporation like Red Hat is their money. Money being spent by governments is the government's money. (You, actually, have far more control over how Walmart spends its money than over what your government does with theirs.) > This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness. Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily concerned with self-preservation.
Money spent on drone programs or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts. Sadly you/I/we cannot rely on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen. Corporations like Red Hat have been hugely beneficial in spending resources to make the Linux kernel more capable.. however they are driven by the need to turn a profit, which means they need to cater directly to the sort of requirements established by their customer base. Customers for EL tend to be far more focused on reducing costs associated with management and software development than on security at the low-level OS. Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats.. assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience I'm sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux. On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses. Even for 'normal' Linux users, a security bug in their Firefox NPAPI flash plugin is far more devastating and poses a massively larger threat than an obscure Linux kernel buffer overflow problem. It's just not really necessary for attackers to get 'root' to get access to the important data... generally all of which is contained in a single user account.
Ultimately it's up to people like you and me to put the effort and money into improving Linux security. For both ourselves and other people.Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith. And this is mostly your money or mine: either tax-fueled governmental resources or corporate costs that are directly reimputed on the price of goods/software we are told we are *obliged* to buy. (Take a look at corporate firewall, home alarm or antivirus software marketing discourse.) I think it is time to point out that there are several "malicious malefactors" around and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among the ones hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them or oblige them to reveal themselves than many of us. I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation). In the end, I think you are right to say that currently it is only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think that I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute randomly some hard-to-evaluate budgets.
[1] A paradoxical situation when you think about it: in a field where you are first and foremost preoccupied by malicious people, everyone should have factual, transparent and honest behavior as the first priority in their minds.Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]It even has a nice, seven-line BASIC pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software. The sad thing is that this is from 2005 and all the things that were obviously stupid ideas 10 years ago have proliferated even more.Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]Note IMHO, we should study further why those dumb things proliferate and get so much support. If it is only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message. If we are facing active people exploiting public credulity: let's identify and fight them. But, more importantly, let's capitalize on this knowledge and secure *our* systems, to showcase at a minimum (and more later on, of course). Your reference's conclusion is especially nice to me. "Challenge [...] the conventional wisdom and the status quo": that task I would gladly accept.Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]That rant is itself a bunch of "empty calories". The converse to the items it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that offers little of value. Personally, I think there is no magic bullet.
Security is, and always has been in human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it's that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to regular distros (e.g. there's no reliable source of a GRSec kernel for Fedora or RHEL, is there?). Why does the entire Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed? No doubt there are plenty of people working on "block classes of attacks" stuff; the question is, why aren't there more resources directed there?Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]>There are many reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies. This seems like a reason which is really worth exploring. Why is it so? I think it is not obvious why this doesn't get some more attention. Is it possible that the people with the money are right not to more highly prioritise this? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, linux development gets resourced. It's been this way for years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it really already gets enough.
You may say that disaster has not struck yet, that the iceberg has not been hit. But it looks like the linux development process is not overly reactive elsewhere.Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]That is an interesting question; clearly that's what they actually believe regardless of what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell there is not sufficient consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly.Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]The key problem with this domain is that it pertains to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to an absence of voluntary strategy persists, we are going to oscillate between phases of relaxed unconsciousness and anxious paranoia. Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I am waiting for the days when armed land-drones patrol US streets in the vicinity of their children's schools for them to discover the feeling. They are not so distant, the days when innocent lives will unconsciously depend on the security of (linux-based) computer systems; under water, that's already the case if I remember my last dive correctly, as well as in several recent cars according to some reports.Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]Classic hosting companies that use Linux as an exposed front-end system are retreating from development while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions. This is really not that surprising: for hosting needs the kernel has been "done" for quite a while now.
Besides support for current hardware there is not much use for newer kernels. Linux 3.2, or even older, works just fine. Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power-management (if the system does not have constant high load, it is not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher. For their security needs, hosting companies already use Grsecurity. I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting. On the other hand, kernel security is almost irrelevant on the nodes of a super computer or on a system running large business databases that are wrapped in layers of middle-ware. And mobile vendors simply do not care.Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]LinkingPosted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]The assembled likely recall that in August 2011, kernel.org was root compromised. I am sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to the most important question: What was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this notice at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.)
That comment was removed (along with the rest of the site News) during a May 2013 edit, and there hasn't been - to my knowledge - a peep about any report on the incident since then. This has been disappointing. When the Debian Project discovered unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public postmortems of the 2010 Web site breaches. Arstechnica's Dan Goodin was still trying to follow up on the lack of a postmortem on the kernel.org meltdown - in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote. Who is responsible, then? Is anybody? Anybody? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown, nothing yet. How about some facts? Rick Moen rick@linuxmafia.comPosted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]Less seriously, note that if even the Linux mafia does not know, it must be the venusians; they are notoriously stealthy in their invasions.Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]I know the kernel.org admins have given talks about some of the new protections that have been put into place.
There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]I beg your pardon if I was somehow unclear: That was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, many years prior, around 2002, and into many other shared Web hosts for many years). But that's not what's of primary interest, and not what the forensic study long promised would primarily concern: How did intruders escalate to root. To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'. OK, folks, you have now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: Whose key was stolen? Who stole the key?) This is the sort of postmortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It would still be appropriate to know and share that knowledge. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was).
Rick Moen rick@linuxmafia.comPosted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]I've done a closer review of revelations that came out soon after the break-in, and think I've found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': Root escalation was via exploit of a Linux kernel security hole: Per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits: - Site admins left the root-compromised Web servers running with all services still lit up, for multiple days. - Site admins and the Linux Foundation sat on the information and failed to inform the public for those same multiple days. - Site admins and the Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?) - After promising a report for several years and then quietly removing that promise from the front page of kernel.org, the Linux Foundation now stonewalls press queries. I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Essentially, there are surmises. If the people with the facts were more forthcoming, we would know what happened for sure.)
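[Editor's aside: the /dev/mem exposure described above is easy to probe. The sketch below simply tries to open the physical-memory device read-only and reports what happens; it is an illustration, not an exploit. On the 2.6-era kernels in question, a root process got a window onto all physical memory, including the running kernel image; modern kernels restrict this device with CONFIG_STRICT_DEVMEM.]

```python
# Probe whether the physical-memory device is openable from this process.
# On hardened/modern kernels the open is refused or heavily restricted;
# on the pre-STRICT_DEVMEM kernels discussed above, root could read the
# running kernel's image through this one node.
import os

def probe_dev_mem(path="/dev/mem"):
    """Try to open the physical-memory device read-only and report the result."""
    try:
        fd = os.open(path, os.O_RDONLY)
    except OSError as e:
        return f"open({path}) refused: {e.strerror}"
    os.close(fd)
    return f"{path} opened; physical memory readable from user space"

print(probe_dev_mem())
```

The result depends on the kernel configuration and the caller's privileges, which is the point: STRICT_DEVMEM turned a six-years-known escalation vector into a non-event.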
I do have to wonder: If there's another embarrassing screwup, will we even be told about it at all? Rick Moen rick@linuxmafia.comPosted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on. -BradHow about the long overdue autopsy on the August 2011 kernel.org compromise?Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]Thank you for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README does not specifically claim this, so then maybe Goodin and his several 'security researcher' sources blew that detail,