Here, I’m setting out my reasons for the answer I gave (which is “yes”, by the way).
We live in a world where terrorists, organised criminals, cyber criminals, and paedophiles all hide behind the very tool we created to ensure the privacy of our data - namely, encryption. There have been a number of terrorism cases where law enforcement needed access to messages stored on encrypted devices, and had to go to (in some cases) extraordinary lengths to obtain that data. Examples include the San Bernardino terror attack, the Westminster terror attack, and many more.
Clearly, attacks such as these take coordination and planning - planning in which, it has repeatedly been shown, the chosen channels of communication are those that support encryption. WhatsApp and iMessage are two of many platforms that offer this as standard, but where the content of a communication is designed to cause widespread harm to human life, it will (in my view) always negate the right to privacy. What I mean by this is that I am a firm advocate of using Artificial Intelligence or Machine Learning to process these communications, and if any suspicion of a terrorist, criminal, or otherwise illegal act (for example, grooming) is present, the message should be flagged to law enforcement so they can take the appropriate action.
Whilst my stance here obviously contradicts my belief in the right to privacy, I firmly believe there are circumstances where that right should not be granted. The bottom line is that privacy is being abused for nefarious purposes - we need to find the balance where it can protect innocents, the intended victims of crime, while at the same time unmasking those who seek to hide behind the very veil we created - encryption.