If I knew what CI/CD was I might be able to know how this could be
helpful. I also don't know what code coverage is or how one could do
automatic testing of it.
CI/CD = continuous integration (CI) and continuous delivery (CD)
See this Wikipedia article for more details:
https://en.wikipedia.org/wiki/CI/CD
A few simple examples:
Your app runs on Windows, macOS, and Linux. Someone commits a change. A build is automatically kicked off. When one of the platforms fails to compile, an email is sent out to the person who made the commit.
Code is checked in. The automatic build system compiles everything and all is good. However, one of the automatic fuzz tests fails. An email is sent to the person who made the commit telling them the code is unsafe.
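To make that second example concrete, here is a rough sketch of a libFuzzer-style fuzz target (parse_packet() is a made-up stand-in for whatever code is under test; assuming clang, you'd build with -fsanitize=fuzzer,address). The CI job runs the resulting binary for a while and turns any crash or sanitizer report into that "your code is unsafe" email:

// fuzz_packet.cpp -- minimal libFuzzer-style fuzz target (sketch).
// Build (assuming clang): clang++ -g -fsanitize=fuzzer,address fuzz_packet.cpp
#include <cstddef>
#include <cstdint>

// Toy parser standing in for the real code under test. It has a deliberate
// out-of-bounds read (never checks the buffer is as long as it claims),
// which AddressSanitizer will catch almost immediately.
static bool parse_packet(const uint8_t *data, size_t size)
{
    if (size == 0)
        return false;
    uint8_t len = data[0];            // first byte claims the payload length
    uint32_t sum = 0;
    for (size_t i = 1; i <= len; ++i) // bug: never checks that size > len
        sum += data[i];               // heap-buffer-overflow when size < len + 1
    return sum != 0;
}

// libFuzzer calls this entry point over and over with generated inputs.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_packet(data, size);
    return 0;                         // non-zero is reserved; always return 0
}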
As programming has evolved from being done by hobbyists to professionals, and as software development has matured from being done by code monkeys to software engineers, it has allowed us to write more and more complex programs. This increased complexity has caused a combinatorial explosion of testing. There is a movement to automate testing and "fail fast" -- catch as many mistakes as possible, as early as possible.
CI/CD is NOT a silver bullet. It is just another tool to help catch bugs proactively and reactively. You could certainly waste all your time setting up automatic testing and have nothing to show for it.
This is why statically-typed languages such as C/C++ are good for production and dynamically-typed languages such as JavaScript are shit for production. In JavaScript you can misspell variable names and not even know it unless you use the stupid "use strict"; hack. It is BASIC all over again. The industry is littered with bad practices because there was "never time to do it right."
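A tiny made-up illustration of the difference: the C++ compiler refuses to even build the typo, while the JavaScript version happily runs and ships the bug unless you opted into "use strict";.

// C++: a misspelled variable name is a hard compile error. Ship it? You can't.
#include <iostream>

int main()
{
    int playerScore = 0;
    // playerScroe = 100;  // uncomment -> error: 'playerScroe' was not declared in this scope
    playerScore = 100;
    std::cout << playerScore << "\n";
    return 0;
}

// The JavaScript equivalent (shown as comments so this file stays C++):
//   var playerScore = 0;
//   playerScroe = 100;          // no "use strict": silently creates a global
//   console.log(playerScore);   // prints 0 -- the bug ships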
Figuring out *when* a build breaks is easy. It is when you compile the
project and it doesn't work properly. Yes, this is an over
simplification as problems may not be noticed until several versions
down the line. Figuring out *why* it doesn't work is the hard part.
Generally I agree with this 99%, however "breaks" is a bit of a complicated subject. There are many types of failures (loosely ordered from simplest to hardest):
* code fails to compile -- trivial to catch
* code compiles on one compiler but not on another -- sometimes requires being a language lawyer, working around differences in language support, etc. (see the example after this list)
* code compiles, but automatic tests fail
* code compiles, automatic tests pass, but there are regressions (new or old)
* a new driver breaks working code
* hardware errors causing a failure. Drive dies, becomes full, etc.
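Here is a small (made-up) example of the "compiles on one compiler, not on another" case: variable-length arrays. GCC and Clang accept them in C++ as an extension (with a warning under -pedantic), while MSVC rejects them outright, so the same file builds on Linux and dies on Windows.

// Builds with g++/clang++ (VLAs are a C99-style extension they allow in C++),
// fails with MSVC, which has no VLA support at all.
#include <cstdio>
#include <cstring>

void fill_buffer(int n)
{
    char buf[n];                 // VLA: not standard C++; MSVC errors out here
    std::memset(buf, 'x', n);
    std::printf("filled %d bytes\n", n);
    // the portable fix is std::vector<char> buf(n);
}

int main()
{
    fill_buffer(16);
    return 0;
}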
What makes programming SO time consuming is dealing with all the edge cases while not ending up in the pedantic hell of trying to catch every single one.
But yeah, you spend days tracking down a bug and then 30 minutes to fix it. /s
git has an amazing tool called git bisect. You can use it to track down:
* when a bug got introduced, or the "inverse"
* when a feature got introduced.
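A typical bisect session looks something like this (the tag name and test command are made up; the bisect subcommands themselves are real). git does a binary search over the history, so even thousands of commits only take a dozen or so steps:

git bisect start
git bisect bad                  # the commit you are on now is broken
git bisect good v2.1            # last version you know was good
# git checks out a commit roughly halfway in between; build it, test it,
# then tell git the verdict and it halves the range again:
git bisect good                 # ...or: git bisect bad
# or let a command decide automatically (exit 0 = good, non-zero = bad):
git bisect run make test
git bisect reset                # done -- back to where you started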
still programming for a living and he keeps telling me about the awful things he has to put up with because the people he is programming for want them done that way.
Yup, that is a common problem in the industry. Many old software engineers get tired of the same old bullshit and leave for more mature fields. I don't blame them.
Management is part of the problem:
* clinging to outdated methodologies,
* not wanting to invest in new tools or methodologies,
* not wanting to invest in the R&D to innovate,
* changing the product for the sake of change to try to justify their jobs.
It doesn't help that you have fads like Agile and then everyone misses the entire fucking point by having useless daily standup meetings that drag on for an hour.
Another great tool is static analysis. These tools are amazing at catching all sorts of "hidden" bugs. If you are a professional and not using one, you are being an amateur code monkey. Coverity is one of the bigger ones. PVS-Studio is amazing at catching inconsistent naming and typos. The fundamental problem is that ALL (useful) software has bugs. The problem isn't the bugs you know about but the ones you don't know about.
Wikipedia has a great list:
https://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
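The sort of "hidden" bug I mean is the classic copy-paste typo below (struct and names made up). It compiles cleanly, looks right to a tired human eye, and is exactly the kind of thing PVS-Studio and friends flag instantly:

struct Point { int x, y; };

// Compiles fine, passes a casual code review, and is still dead wrong.
bool points_equal(const Point &a, const Point &b)
{
    return a.x == b.x &&
           a.y == b.x;   // copy-paste typo: should be b.y
}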
Part of being a great software engineer is knowing which old methodologies to keep and which new ones to embrace.
Looking at mame's page on GitHub ...
https://github.com/mamedev/mame
... I see it has 1600 forks. I'd say that is a pretty successful example of collaboration!
If some developer doesn't want to learn how to use git+GitHub then it is hard to take their whining seriously when so many other developers understand the benefits of using modern tools.
For Nox Archaist we used git+GitHub and it was REALLY nice.
One of THE biggest problems projects have, especially Open Source ones, is what I call "build friction". A project should EASILY build "out of the box". If it requires jumping through a dozen hoops just to build the damn thing, chances are people aren't going to waste their time trying to figure this shit out. For starters, there should be either a Makefile (for Un*x systems) or a Visual Studio solution file (for Windows projects) that "just compiles". The lower the barrier to entry for building a project, the easier it is to get people to contribute.
That's why I think people who make excuses for "git is too hard" are being extremely short-sighted. They don't want to invest the 10% effort to get the 90% benefits so they waste their time doing 80% more work.
Popularity is usually a shitty metric for quality (the billions served by McDonald's is the perfect example), but in git's case it is 100% accurate. It is very rare for a single programmer to redefine the entire software industry once, let alone twice.
m.