This is nuts. Some contributions have made it into the Kernel already. All contributions from UofM are being removed, though, and the university is barred from contributing any Kernel patches in the future.
With the disclaimer that I haven’t deep-dived this and there’s plenty of information yet to be revealed…
This appears to be a complicated situation, directly comparable to the gray area in ethical hacking. Depending on the exact details of how the group conducted the research, including how they planned to disclose, how unnecessary the risks were to achieving the objective, and what their intent was, there may be a ton of justifiable criticism against the group.
However, outright banning the entire university for the actions of a group and discrediting a “secret shopper” approach to testing kernel security is a product of ego, not concern for real-world security.
Does Greg expect a heads-up from hackers and gov’t actors before they push bad code? There’s a place for structured pentesting, but security is about anticipating the unknown just as much as the known. Organized pentesting has to be mixed with the unexpected. There’s no excuse for the world’s most important software not to accept both.
Personal vendettas are not an adequate response to being caught with your pants down. This was an opportunity to broadly improve security and to recommend a better structure for pentesting holes in their internal review, but the Kernel team seems to be blowing it so far.
I’m glad that these were spotted and rejected. It makes me feel confident in the QA of the kernel.
I don’t think that they need to be given the heads up about this sort of thing because anyone who’d want to actually mess with the kernel would not have that kind of courtesy either.
It’s never mentioned, though, whether the patches that did make it into the kernel were among the intentionally bad ones.
The issue is in the morality of how they went about this. If those patches had been accepted, would UofM come out and say it was just a test and the kernel team failed? Or would they silently let them go into the kernel and let that cause problems for millions of people?
When Greg says they caught these problematic patches and asked them to stop, the test is over, but UofM acted like they were the victim and continued their actions. What’s the point of that? Everyone knows what you’re doing now; it’s no longer a test and proves nothing.
I’d argue those flaws have to be accepted in order for the pentest to mean anything, and if they gave a prior heads-up about where the problems were, it’d make the result extremely easy to mitigate (e.g., “hey team, be extra vigilant this week”) or to cover up altogether.
That’s where I’d like more information, though considering this was done with plenty of evidence, I’d suspect they probably intended to disclose no later than what would be necessary given the legal ramifications.
Social engineering is the least anticipated form of pentesting, and it’s used constantly in the wild. If someone can push vulnerable code into the kernel by using a shotgun approach and complaining about how new devs are treated, that’s a hole that has to be patched.
Getting customer service on the line and just complaining until you get what you want is a common exploit. However gross it is to claim a handicap, a bad actor or nation state will ramp that up 10x.
Just to bring this back around, I’m not saying they did this perfectly, just that I think there’s more to it.
I don’t think it is a personal vendetta; it feels more like an exemplary punishment to send a message: you can push bad code, but only in good faith.
This really needs a deep dive (which I don’t have time for). I’ve been discussing this, and I’m hearing there was disclosure, but not responsible disclosure, and it was revealed in a needlessly nasty way.
I don’t know if the Kernel team is planning to release an acknowledgment of the vulnerabilities discovered, or a plan to fix them beyond ignoring University of Minnesota commits.
I also shouldn’t expect every dev on the Kernel team to be a shining exemplar 24/7 during a pretty crappy series of events. This is an open conversation on a mailing list between devs, not an official statement by the Kernel team or an accurate summary of Greg’s full stance.
Fair enough. I’d like to find out more, but I’ll have to find a solid tutorial on how the heck mailing lists work.
The Open Source Community and Greg are reacting like a fanatical religious sect whose main dogma has been attacked. All Open Source believers unite to throw the university off the cliff and ban them for life from the holy church. Instead of solving the issue the university proved, they blame and kill the messenger.
There has got to be a better way to socialize the rules and goals of a test like this with the targeted project.
Yeah. Even with the most positive reading of the situation, sending malfunctioning code in this way is still a very bad idea.
It creates additional work for people working on the kernel, and checking this kind of code can be quite a demanding task. What’s worse, if the code got integrated, it could affect many machines using it – although, hopefully, by that time the contributors would have informed the people working on the kernel about it.
The fact that the positive side of this situation could have been achieved by simply working with the Linux team (where the people doing the review wouldn’t know the contribution was bugged, but some other people working on the code would) makes this pretty one-sided for me.
This is indefensible. It’s no different from your doctor secretly giving you a different medicine than the one prescribed. Even if the doctor believes the secret new medicine is a miracle cure, I suspect everyone here can agree the doctor would be terribly wrong. What UMN did is unethical, it violates all ethical research standards, and the reaction of the kernel team was, in my opinion, not strong enough.
Agreed. If they want to test, then they need to create their own lab for their testing requirements. This is a simple example of them deflecting the cost (of the lab, environment, developers needed, etc.) onto someone else.
List of resources:
Open Letter: An open letter to the Linux community
Credit: Brodie Robertson
When I first read about this security incident, I thought this was just a college security class performing blue team work. However, as I read deeper into the details, this is beginning to look more like red team work.
They say their goal was to improve security, but it appears that they were looking not only to identify a vulnerability in the patching process, but were actually intending to exploit the vulnerability. They say they were not, but they got caught in the process. I wonder what would have happened if they had not gotten caught.
The method in which this was done shows a pretty serious lack of judgement. It surprises me that this came from an educational institution. The response from the school was also surprising. They acted like they had no idea about any of this, which might be true, but to think that the school would give a professor that much latitude over something external to the school just shows more lack of judgement.
I almost closed the tab of my browser when I read their first recommendation, an update to the code of conduct. What class was this for?
Again, the subject is worthy of research but this type of testing should be performed in a closed lab, not a production environment. That’s IT 101 right there.
I’ve been desperate to hear from someone in security on this topic, and I’m extremely grateful to Steve Gibson for this one; I’d consider this a must-watch.
Ask Noah with a good deep dive and another angle.
I’ve had quite some time to think about this, and I’m very grateful for the luxury of conversing with and hearing from some very intelligent people on and off the forum. Something about this whole thing has felt very off, and it’s taken me some time to work through it.
I’ve changed my mind. I no longer believe the actions of the pentesters were ethical.
I’m highly protective of people who break rules/laws/consent when a serious problem needs sunlight. An excellent example is Edward Snowden, who illustrated just how dangerous it is to use these lines as moral absolutes.
A similar example is a pentester who broke the Computer Fraud and Abuse Act by abusing Google’s API to record calls going to the FBI and Secret Service. After he provided the recordings to the Secret Service, they could easily have had their cake and eaten it too by jailing him… but they didn’t press charges, out of thanks and because of the chilling effect it would have had on pentesting.
So why did I change my mind? Snowden had watched every legal attempt to blow the whistle fail, and the NSA chief lied to his elected government. Seely didn’t exhaust every option, but he at least exhausted them with Google. The university pentesters didn’t treat rule breaking as a last resort of desperation; it was their first resort. Even in something as immeasurably critical as testing Kernel security, it’s more important that all reasonable avenues are exhausted before an alternative is even considered.
How this is being talked about scares me.
I’m not talking about small discussions like these, where friends all come together for a messy conversation to improve our personal understanding, and whether or not we all agree in the end isn’t relevant.
I mean how most major publications and big FOSS voices are evoking emotionally charged, righteous condemnation while discussing this in binary terms, as if there’s no situation in which justification for what these pentesters did could exist. Laws, rules, and consent discussed in emotional absolutes are scary business.
As for the findings, which Security Now termed critical… the Kernel team isn’t taking ownership (that I know of publicly), and the Linux media isn’t pressuring them to. Whatever fangs they would have had in requesting assistance and drumming up support have been exchanged for saving face and punishing the unethical pentesting methods of a failed reveal, though at everyone’s expense.
Just a few thoughts, and where I’ve gotten to so far.
When pentesting, the most critical rule is: don’t ever lift a finger without a statement of work or scope document that provides written permission to perform only the things listed in it. This is known within the security community as the proverbial “get out of jail free card”. It is the ethical line that one does not step across.
That being said, the security issues around kernel development, or that of any software package, should not be taken lightly. Any chance at improvement should be considered. On top of that, audit processes, if not already in place, should be put in place. These days we have software that can scan code for issues, and development processes like SecDevOps that can limit the exposure.
I’m not associated with any Linux development, so I can’t speak to their state of security, but if the faculty/students at the school have concerns, then an open conversation should be started. I do not know to what extent this was done, or even if it was attempted, but I believe that one should first seek to understand before raising a security concern. If you don’t open a conversation, how do you know you have all of the information you need?
I agree that what they did was wrong, but I don’t think it represents a simple mistake. It was a huge step across that ethical line and because of that I understand the reaction. Let’s see what happens next between these two groups. Either they find a way to communicate or they continue to build a wall between each other.
This is a very interesting “saga”. To me, the research conducted reflects some aspects of the SolarWinds attack. Although the UMN paper focused on the process of submitting Kernel code, while SolarWinds exposed poor authentication exploited by state actors, the impact and the threats are on the same level.
Looking at discoveries of vulnerabilities that have existed in the Linux Kernel for years if not decades, and at the vast number of contributors, there is real potential for malicious exploitation of open source projects by criminals, organized threat groups, and state actors. I wouldn’t need this paper to tell me the commit process is not 100% foolproof.
I understand the hypocrite commits were discovered prior to implementation, and that other commits “not related to the research” did have implementations that just did not add up for the Kernel Team. I also understand that the commit sources have a blurred distinction of who was actually behind them (student or professor). These actions led the Kernel Team to raise their concerns with the authors of the research paper and the heads of the Computer Science department at UMN. Whether intentionally or not, UMN faculty and students persistently challenged the Kernel Team’s expressed concerns about their commits and conduct.
It is great that the other UMN commits are being pulled for review, and that UMN is barred from contributing future commits. The more eyes the better; I just wish we had more eyes to review more commits. I hope this whole scenario merely looks more serious than what UMN and the research in question actually intended, but it is good that more awareness of these threats is on the community’s radar of concern. I hope, if nothing else, this inspires more eyes to review more of the Kernel code and help weed out other long-standing bugs. Scrutinize all the things~