> The results speak for themselves. The team set up the new camera on the 20th floor of a building on Chongming Island in Shanghai and pointed it at the Pudong Civil Aviation Building across the river, some 45 km away.
The image right beneath this paragraph [0] contradicts it. Does anyone know the reason for this discrepancy?
Interesting image. With two minutes of fiddling with the Levels tool in Paint.NET I can resolve c) to the level of detail of e) / f). The necessary bits of information about the object are already present in c).
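For the curious, a Levels pass is essentially just a contrast stretch, and you can reproduce the effect in a few lines; here's a minimal sketch with Pillow, with a made-up filename:

    from PIL import Image, ImageOps

    # A Levels adjustment amounts to this: remap the narrow band of
    # intensities in the hazy crop onto the full 0-255 range.
    img = Image.open("panel_c.png").convert("L")      # hypothetical filename
    stretched = ImageOps.autocontrast(img, cutoff=1)  # clip 1% at each extreme
    stretched.save("panel_c_stretched.png")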
As a bit of a layman: is there even any legitimate reason at all (other than a user installing it on their own machine for reverse-engineering purposes) for anyone to install a root certificate anymore?
I could understand it if it was a small company doing so at the time when certificates were expensive, but Sennheiser has plenty of money and certificates can be obtained for free nowadays.
Nobody will issue Sennheiser a certificate for this purpose. Every so often a company abuses a cert they were issued to do what Sennheiser wanted to achieve here (local loopback HTTPS) and when they're caught the cert is revoked and they get a slap on the wrist. Blizzard is a recent example.
The Right Thing (TM) is to not do HTTPS at all: a modern web browser is supposed to conclude that ::1 and 127.0.0.1 are secure without HTTPS, since there is no possibility of a "man in the middle" on your own computer's loopback.
If you want an arbitrary (thus https-based) website to be able to communicate with a localhost server using websockets, you are forced to use https on the localhost server, because the browser won't connect to non-secure websockets from an https page, even if the websocket is to localhost.
The actual right thing to do is to generate a private key and certificate (for a specific, public name you point to 127.0.0.1) during the software installation and add the latter to the trusted store. Now you don't have this vulnerability because each computer has a different trusted certificate with a different key, so a random attacker cannot just use the key they got to spy on other users.
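As a rough sketch of what that install-time step could look like with Python's `cryptography` package; the hostname is a made-up placeholder that the installer would point at 127.0.0.1:

    from datetime import datetime, timedelta, timezone
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # A fresh key pair per installation, so one user's key is useless
    # against anyone else's machine.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    host = "local.vendor-daemon.example"  # hypothetical name resolving to 127.0.0.1
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, host)])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.now(timezone.utc))
        .not_valid_after(datetime.now(timezone.utc) + timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(host)]), critical=False)
        .sign(key, hashes.SHA256())
    )

    # The installer writes these out and registers the cert (never the
    # private key) with the OS trust store.
    with open("daemon.key", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.PKCS8,
                                  serialization.NoEncryption()))
    with open("daemon.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))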
I run some services for my private use. It's crazy that I need to have them certified by some third-party overseas CA because I can't get my own devices to trust my own certificates.
We're not at that point yet, but running your own trust root is getting quite annoying. For example, Android constantly nags about "network might be monitored" when custom certificates are installed.
As far as changing the certs goes, I know offhand how to do it on a couple of random Linux distros, but I'm not 100% sure for Android. You might just try searching the repo for the default certs, then looking at how they are built into the image and tweaking that.
As someone who has been burned by self-signed, internal-only sites: take the extra 15 minutes and get a proper cert and domain name for your internal sites. It can save a massive amount of pain later.
Just hope you never need an external system, or a cloud hosted service to talk to your dev/test environments. It’s so cheap and easy to do it right that it just doesn’t make sense to do it the other way.
In this case we started doing hybrid cloud, and we were unable to address a ton of sites since they were on a made-up, internal-only TLD. Plus, everything we could address served up certs we couldn't trust, since we were utilizing services that didn't allow us to modify the trusted root cert store.
We saved probably $100 and 2 hours by rolling our own solutions instead of doing things the standard way. It took weeks to clean the mess completely up.
Well, yeah, using a made-up domain is obviously a bad idea ... but what does that have to do with root CAs? And how does trusting your own root CA lead to not trusting the certs presented by other parties? I don't really understand what kind of scenario you are describing there.
And no, I don't see anything "standard" about not running your own CA; it is perfectly standard as far as I am concerned, and a really good idea as well. Relying on an external CA for internal services creates both availability and security risks: if you need an external CA to set up or keep operating internal services, that is an availability risk, and if you trust the whole standard set of root CAs for all of your internal services, that is a massive security risk.
Obviously, if all your services are hosted in house and you will never need to expose internal services externally, go for it. But as soon as your organization grows, splits, merges, or starts utilizing other services that don't give you access to the trust store, you are boned. It screwed us, and it was a giant pain to fix.
Why would all your services have to be hosted in house, and why would it prevent you from "exposing internal services" (I mean, apart from the fact that they kind of aren't internal services anymore from that point on)?
For one, there is no problem hosting your own services elsewhere and having them use your own certificates. But more importantly: Why should your own CA prevent you from obtaining certificates from an external CA for external services? I mean, it just doesn't, that's how I run stuff: Purely internal stuff runs on internal CA, stuff that needs to face the public somehow runs on globally recognized CAs. And it's mostly trivial to switch services from one to the other - or to just run two endpoints, one using the internal CA, one using an external CA.
It seems to me like your problem wasn't your own root CA, your problem was that your services were incompatible with external CAs for some reason, among them probably your private DNS root? But that isn't a reason why you should put your internal services at risk from mismanaged public CAs, that's simply a reason why you should use a global domain and support provisioning of certificates from external CAs.
The big issue was identifying all the impacted services, reconfiguring all of them, testing, and redeploying them. If it's a few services, fine. But once it's a few hundred, it's a pain.
Well ... but then that still has nothing to do with using your own root CA, does it? I mean, why would you want to suddenly reconfigure all of your services to use a different CA? It might come up here and there that you need external access to some service that was internal before, but that is hardly a huge problem to reconfigure?!
And also, if you have so many services running that swapping out all of the certificates is a major headache, your primary mistake probably was that that wasn't automated? When keys are compromised, you should be able to reprovision anyway.
It can be a real pain in the butt to go up and down the whole stack and reconfigure every library and application that might detect an insecure connection and bail. Need several independent webapps to communicate? Hoo boy.
Who should have the ability to install root CA certs?
I like using HTTPS instead of HTTP, so I need some installed. Who should be responsible for managing them?
And before you say the CABF, they're ultimately not the ones who decide what gets installed on your computer. The answer to that question is much more complicated.
There was never a good reason to install a root certificate for the purpose of speaking securely to your own gear or web services. In that case, you don't need to add the certificates to the root trust store; you just need to create an SSL context that has them set as trusted.
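In Python, for instance, that's only a few lines (the CA file name and URL here are placeholders):

    import ssl
    import urllib.request

    # Trust our own CA for this one connection; the system-wide root
    # store is never touched.
    ctx = ssl.create_default_context(cafile="my-private-ca.pem")

    with urllib.request.urlopen("https://gear.internal.example/", context=ctx) as resp:
        print(resp.status)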
However, the problem is, nobody understands SSL properly. Among the people who don't understand SSL properly are, alas, a number of the people writing SSL libraries. Not the actual SSL libraries like OpenSSL, but all the surrounding libraries meant to make it "easy to use", which includes libraries that try to make HTTP easy to use and abstract away the difference between HTTP and HTTPS. Pretty much every "ease of use" library I have ever seen accomplishes its ease of use by dropping features from the underlying SSL support, and it's clear the authors often don't understand why those features were there or the consequences of dropping them.
I have a particular case where I've got some Perl code that is literally 4 levels deep in "SSL ease of use" code, with pretty much every layer dropping SSL features along the way (even the base level Net::SSLeay is missing a lot of stuff once you go beyond the basics, and it only gets worse as the stack gets higher). Once I had to poke support for a certain feature all the way down through the entire stack because it got dropped really early.
So what you need to do in this case is create your own root store of trust, and then stick your own certs in there from some .pems or something, and then use that to initialize SSL on your connection. But it's complicated to do that at the base level of a lot of SSL libraries, and this is often one of the first features to be "abstracted away" by support libraries, and a lot of HTTP libraries end up trying to "abstract away" SSL so thoroughly that they don't even have parameters to control the SSL elements of the connection, and if they do, they have some ad-hoc selection of random bits that someone once needed, rather than comprehensive support.
The upshot is that I'd consider it likely that they were using one of these libraries, and the only way they could see to get their certificates trusted was to stick them in the default root store, because that's the only thing that would work with such libraries. You can also find web pages and such recommending this approach. It's also possible they just came across one of these web pages or something and put stuff in the root without realizing what it really meant, even though their library allowed them to do everything I said, because it's way easier to slam something into the default trust store than to write the code to create your own at run time. (It ought to be easy to write that code. The rather nice Go TLS library makes it almost as easy as I say it is: create cert store, add certificate, set that as your root trust; modulo a bit of error handling, it really is just about that many lines of code. But the "ease of use" libraries can really get in the way, when the people adding the "ease of use" abstractions themselves don't understand what that means, or why you might want/need it, or how to make it easy.)
> what you need to do in this case is create your own root store of trust, and then stick your own certs in there from some .pems or something, and then use that to initialize SSL on your connection
Their goal was to allow the browser to connect to the local daemon, for web-based softphones, so this wouldn't work.
Yup, I was reading through thinking there was some major update to Kitty and was a bit confused when the page said it was only available for Linux and Mac.
The key here is that this becomes easier as you do it. It's hard fighting off those first few thoughts, but I personally find that once I do, my brain is put into slow gear and fewer thoughts come up. This has an accumulating effect and I'm asleep before I know it.
Well, sometimes the average is 0, e.g. (-2 + 2)/2.
And the whole point is that this is meant to catch unforeseen interactions as soon as possible. If you add a check it's no longer unforeseen, and it may easily slip the programmer's mind.
But probabilistically doesn't it make sense to skip insurance altogether?
If insurance manages to turn a profit despite having overhead expenses (salespeople, infrastructure, lawyers, etc.), and assuming they don't get a special discount on whatever they're insuring, then there are more people paying without using it than there are people who need it.
If I'm an average or above average driver, for example, then it doesn't make sense to have insurance, does it? Wouldn't it make more sense to save the money I'd otherwise be using for insurance and pay myself in case something happens? That way my money would go only towards my problem and not towards worse drivers and insurance company expenses.
Is the product that insurance offers really just peace of mind?
Have a look at the expected utility Wikipedia article. In short:
The difference between owning a total of 0 and a total of 10000 USD is way more important than the difference between owning 10000 and 20000 USD.
So assume you have 20000 USD. There's a 10% chance that you will lose it all and become homeless. It makes sense for you to pay 2000 USD up front to get 10000 USD back in case that happens, because being homeless in 10% of the cases is way worse than having 18000 USD instead of 20000 USD in 90% of the cases.
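A quick back-of-the-envelope check with log utility (a standard diminishing-returns choice; the numbers are the ones above):

    import math

    p = 0.10       # chance of the disaster
    u = math.log   # log utility: each extra dollar matters less

    # Uninsured: 90% keep 20000, 10% wiped out (use 1 USD to stand in
    # for "homeless", since log(0) is -infinity).
    uninsured = (1 - p) * u(20000) + p * u(1)

    # Insured: pay 2000 up front; in the disaster you still get 10000 back.
    insured = (1 - p) * u(18000) + p * u(10000)

    print(uninsured)  # ~8.91
    print(insured)    # ~9.74 -> higher expected utility, despite lower
                      # expected wealth (17200 vs 18000 USD)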
If I were trying to explain insurance to a person from Mars, I guess I would say humans will pay a premium to reduce variance in cash flows. This is a win-win situation, not a case of one party taking advantage of the other. It's one of the basic services that underpins civilization.
> If I'm an average or above average driver, for example, then it doesn't make sense to have insurance, does it?
You’re waiting at a red light when a drunk driver smashes into you. Your skills as a driver can’t affect the probability of this. It may not be your fault, but someone needs insurance.
rsync has always been my favourite because it makes the most sense to me (and the --help/man page is easy to read).
`rsync -n -avh --progress source destination:~/asdf/` for a dry run, followed by ctrl-p, ctrl-a, alt-f, alt-d, alt-d to remove the `-n` flag and then execute that for the real thing.
Occasionally though, I'll also use sftp if I'm just pulling one thing - perhaps even after sshing to the remote machine.
For all of these, SSH keys should be set up (and desktop logins secured) to make life easier.
As for Android, adb push and adb pull -a seem to work better than mtp:// or AirDroid in my experience.
If you think of it in terms of archives and whether you want to "extract" into the current directory, or a new directory within the current one; that might help.
`rsync source destination` will take the entire source directory and plonk it inside destination as a neat bundle.
`rsync source/ destination` will take the contents of source (but not the directory source itself) and plonk them in destination.
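Concretely (made-up paths):

    $ rsync -a source destination/    # -> destination/source/file.txt
    $ rsync -a source/ destination/   # -> destination/file.txt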
I found the info page a little dry, but it does describe this succinctly.
Recently used it to copy half a terabyte of stuff on my home network. Unsure about the exact invocation, but it supported the same flags as cp as far as I could tell.
> * If I make some claim during class that is not accurate, the students - who, again, are all professional engineers in the work force - will eat me alive.
This is interesting. I imagine that in high-level training for any industry this could happen. I'd personally try to let my students know that I'm not an all-knowing god and ask them to correct me if they noticed something I said was wrong, and I'd be happy learning something new. Maybe this is what you meant, but were you ever met with hostility or animosity because of this?
Also, what, in general, is intermediate-to-advanced Python?
No hostility ever, no, because (a) I freely acknowledge I don't know everything, and (b) I'll not hesitate to say if I don't know something or I'm not sure (or become unsure). People are of course really understanding and good-natured about it. "Eat me alive" is an overstatement; better to say they won't let it slide.
> Also, what, in general, is intermediate-to-advanced Python?
Basically the kinds of topics covered in my Python book, which I'll plug here without shame:
> Even something as simple as installing a library is a conceptual leap for these people (why wouldn't the software just come with everything needed to work?).
> Have you ever tried explaining the various python package and environment management options to someone with a background in Excel/SQL?
I don't understand the difficulty I've often seen voiced against this. Why would a newbie or someone who just wants to get analytical work done need anything beyond installing Python and doing `pip install library`? It's certainly orders of magnitude easier and faster than, say, using a C library. The only trouble I can see a newbie running into is if they want to install a library which doesn't have precompiled wheels and they need some dependencies to build it, but that's rarely an issue for popular packages.
Pip install needs root on my Ubuntu install, on my lab's and university's old Red Hat servers, and on my Windows Subsystem for Linux install. I've had to install Anaconda Python to get any real work done on all three systems. Anaconda works fine for me, but I've not even had to think about anything to install packages in R.
Ubuntu doesn't ship with pip or virtualenv. In fact it ships with a version of Python where the built-in equivalent to virtualenv, pyvenv, is explicitly disabled.
So you have to install extra Python packages, as root. You have to have that Python experience that guides you to install as few of them as you can, just enough so you can get started with a virtualenv, so you don't end up relying on your system Python environment.
And this is really hard to explain to people who aren't deeply familiar with Python. "Never use sudo to install Python packages! Oh, you got errors. We obviously meant use sudo for two particular packages and never again after that."
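If memory serves (this is from memory, so treat the package name as a best guess for 16.04), the minimal incantation is something like:

    # The one unavoidable sudo step: let Python create working venvs.
    sudo apt-get install python3-venv

    # Everything after this stays in your home directory, no sudo needed.
    python3 -m venv ~/myenv
    ~/myenv/bin/pip install requests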
In the terrible case where you don't have root, you have to ignore Ubuntu's version of Python and compile it yourself from scratch. Hope the right development libraries are installed!
Maybe I'm wrong and there's a method I've overlooked. If there is: please show me how to install a Python package on a fresh installation of Ubuntu 16.04, without ever using sudo, and I will happily spread the good news.
That sounds like a major problem with Ubuntu, rather than with Python or pip.
On Windows, meanwhile, the standard Python installer gets all this set up properly in like three clicks. Better yet, because it installs per-user by default, "pip install" just works. And if you still choose to install it globally, it will fail, but it will tell you exactly what you need to do to make it work:
Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: ...
Consider using the `--user` option or check the permissions.
One can't help but wonder how we ended up in a situation where the most popular Linux distro somehow does Python worse than Windows.
Don't despair: in the Anaconda installed with Visual Studio (now a default) you can't update or install packages without being admin! And if you install Anaconda again, it merges the start menu entries and you can't tell which is which...
Eh, that has always been the case for Windows vs Linux: you don't have to compile anything yourself, because there is always an installer that will deploy precompiled binaries for whatever you want to install (except when there isn't, because nobody has compiled it for Windows, at which point you're in deeper shit; or when something installs itself but doesn't update your envvars, so you have to do it yourself, which kind of defeats the purpose of the whole "installer" thing).
Iiish. For small projects, or when you want development versions etc. that are not in a distro's repos, it's pretty common to have to do a configure-and-make.
Then again, with Python in particular, I have often had errors either with pip install, or after "successful" installation, for various reasons.
In this case, we were talking about Python itself. I don't see any particular reason why most people should need to build it themselves, whether on Windows or on Linux. Packages are another matter, but here the issue is the way Python itself is packaged on Ubuntu.
Not on a personal computer, no, but the vast majority of managed systems won't let you install anything outside of your home directory. Of course you could install using `pip install --user` but you will inevitably run into problems when something you install locally needs an updated version of something installed on the system.
Makes it fun when running on a VM in the cloud which only has a root user. Docker becomes almost essential for preventing errant Python scripts from fudging up the system.
While you're right that it's bad advice, it also highlights the problem with pip that these less experienced people have. The ideal way to deal with Python packages is virtualenvs, but setting up a virtualenv, and then activating it every time you want to use it (or setting up tools to do it for you) is an incredibly huge headache for less experienced people to deal with. R doesn't require that whatsoever.
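For reference, the workflow being asked of them goes something like this (paths are arbitrary), per project and per shell session:

    python3 -m venv ~/envs/analysis      # one-time setup per project
    source ~/envs/analysis/bin/activate  # again in every new shell session
    pip install pandas                   # now isolated from the system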
Neither language requires an isolated dev environment, but it can help with avoiding headaches. Just as Python has things like virtualenv and buildout, R fortunately has 'packrat' available, which provides a similar isolated/reproducible dev environment solution.
You can certainly update multiple packages at once using pip. Just use a requirements.txt file, which you should be doing anyway if you're using multiple packages (or just want to be able to reproduce your environment).
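Something like this, with made-up pins:

    # requirements.txt
    numpy>=1.11
    pandas>=0.19

Then `pip install -U -r requirements.txt` installs or upgrades everything in one go.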
>> Why would a newbie or someone who just wants to get analytical work done need anything beyond installing Python and doing `pip install library`? It's certainly orders of magnitude easier and faster than, say, using a C library.
Except when it isn't. For instance, when some wheel fails to build because you're lacking the Visual C++ compiler (or it's not where pip thinks it should be):
C:\Users\YeGoblynQueenne\Documents\Python> pip install -U spacy
Collecting spacy
Downloading spacy-1.2.0.tar.gz (2.5MB)
100% |################################| 2.5MB 316kB/s
Collecting numpy>=1.7 (from spacy)
Downloading numpy-1.11.2-cp27-none-win_amd64.whl (7.4MB)
100% |################################| 7.4MB 143kB/s
Collecting murmurhash<0.27,>=0.26 (from spacy)
Downloading murmurhash-0.26.4-cp27-none-win_amd64.whl
Collecting cymem<1.32,>=1.30 (from spacy)
Downloading cymem-1.31.2-cp27-none-win_amd64.whl
Collecting preshed<0.47.0,>=0.46.0 (from spacy)
Downloading preshed-0.46.4-cp27-none-win_amd64.whl (55kB)
100% |################################| 61kB 777kB/s
Collecting thinc<5.1.0,>=5.0.0 (from spacy)
Downloading thinc-5.0.8-cp27-none-win_amd64.whl (361kB)
100% |################################| 368kB 747kB/s
Collecting plac (from spacy)
Downloading plac-0.9.6-py2.py3-none-any.whl
Requirement already up-to-date: six in c:\program files\anaconda2\lib\site-packages (from spacy)
Requirement already up-to-date: cloudpickle in c:\program files\anaconda2\lib\site-packages (from spacy)
Collecting pathlib (from spacy)
Downloading pathlib-1.0.1.tar.gz (49kB)
100% |################################| 51kB 800kB/s
Collecting sputnik<0.10.0,>=0.9.2 (from spacy)
Downloading sputnik-0.9.3-py2.py3-none-any.whl
Collecting ujson>=1.35 (from spacy)
Downloading ujson-1.35.tar.gz (192kB)
100% |################################| 194kB 639kB/s
Collecting semver (from sputnik<0.10.0,>=0.9.2->spacy)
Downloading semver-2.7.2.tar.gz
Building wheels for collected packages: spacy, pathlib, ujson, semver
Running setup.py bdist_wheel for spacy ... error
Complete output from command "c:\program files\anaconda2\python.exe" -u -c "import setuptools, tokenize;__file__='c:\\users\\yegobl~1\\appdata\\local\\temp\\pip-build-7o0roa\\spacy\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'ex
ec'))" bdist_wheel -d c:\users\yegobl~1\appdata\local\temp\tmpypkonqpip-wheel- --python-tag cp27:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-2.7
creating build\lib.win-amd64-2.7\spacy
copying spacy\about.py -> build\lib.win-amd64-2.7\spacy
[217 lines truncated for brevity]
copying spacy\tests\sun.tokens -> build\lib.win-amd64-2.7\spacy\tests
running build_ext
building 'spacy.parts_of_speech' extension
error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27
----------------------------------------
Failed building wheel for spacy
Running setup.py clean for spacy
Running setup.py bdist_wheel for pathlib ... done
Stored in directory: C:\Users\YeGoblynQueenne\AppData\Local\pip\Cache\wheels\2a\23\a5\d8803db5d631e9f391fe6defe982a238bf5483062eeb34e841
Running setup.py bdist_wheel for ujson ... error
Complete output from command "c:\program files\anaconda2\python.exe" -u -c "import setuptools, tokenize;__file__='c:\\users\\yegobl~1\\appdata\\local\\temp\\pip-build-7o0roa\\ujson\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'ex
ec'))" bdist_wheel -d c:\users\yegobl~1\appdata\local\temp\tmp8wtgikpip-wheel- --python-tag cp27:
running bdist_wheel
running build
running build_ext
building 'ujson' extension
error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27
----------------------------------------
Failed building wheel for ujson
Running setup.py clean for ujson
Running setup.py bdist_wheel for semver ... done
Stored in directory: C:\Users\YeGoblynQueenne\AppData\Local\pip\Cache\wheels\d6\df\b6\0b318a7402342c6edca8a05ffbe8342fbe05e7d730a64db6e6
Successfully built pathlib semver
Failed to build spacy ujson
Installing collected packages: numpy, murmurhash, cymem, preshed, thinc, plac, pathlib, semver, sputnik, ujson, spacy
Found existing installation: numpy 1.11.0
Uninstalling numpy-1.11.0:
Successfully uninstalled numpy-1.11.0
Running setup.py install for ujson ... error
Complete output from command "c:\program files\anaconda2\python.exe" -u -c "import setuptools, tokenize;__file__='c:\\users\\yegobl~1\\appdata\\local\\temp\\pip-build-7o0roa\\ujson\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, '
exec'))" install --record c:\users\yegobl~1\appdata\local\temp\pip-ibtvwu-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_ext
building 'ujson' extension
error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat). Get it from http://aka.ms/vcpython27
----------------------------------------
Command ""c:\program files\anaconda2\python.exe" -u -c "import setuptools, tokenize;__file__='c:\\users\\yegobl~1\\appdata\\local\\temp\\pip-build-7o0roa\\ujson\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --recor
d c:\users\yegobl~1\appdata\local\temp\pip-ibtvwu-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\yegobl~1\appdata\local\temp\pip-build-7o0roa\ujson\
Now that's newbie scary.
Note that this is just one case where I was trying to install one particular package. I've got a couple more examples like this in my installation diary, notably one where I tried to install matplotlib, this time on Windows Subsystem for Linux, a.k.a. Ubuntu, and hit a conda bug that meant I had to use an older version of Qt until upstream fixed it, and other fun times like that.