Censorship

Online censorship is becoming a bigger and bigger problem with the boom of the internet and the spread of technology into more and more areas of our lives. As Google searches and YouTube how-to videos become more prevalent, the number of ideas floating around grows with them. Censorship is the suppression of those ideas that are thought to be harmful to an organization’s point of view or ideology, and it is usually employed by a governing body.

There are many social, moral, and ethical issues surrounding censorship. One issue is that censorship effectively limits the scope of thought of a government’s subjects. It hurts free speech and forces citizens to think the way the government wants them to think. This is quite obviously not ethical, as human minds are not meant to be shaped from above, and humanity’s innovation is driven by free-flowing ideas and the ability to think for oneself. A government would go about imposing these restrictions by blocking internet searches that use certain keywords and by spreading internet propaganda to limit the scope of thinking and to promote ideas that serve its agenda. As has been mentioned throughout this post, a government would want to use censorship to hide a certain way of thinking from its subjects–for example, China using censorship to keep democratic ideas from spreading through the population.

Even if censorship in a country is wrong, I think it is ethical for a company to respect the censorship requests of the country in which it operates. It is not up to a company to make the laws or to decide what is right in a country, so I think it should follow governmental orders. That being said, in cases such as China or North Korea where the censorship is especially severe, companies should refuse to do business there if the censorship interferes with their daily operations, and should look to move out of the country. I think this is the most ethical way to deal with the problem: rather than trying to fight a revolution of sorts against the government, the company should simply refuse service when it feels its ethical standards are being compromised by a country’s censorship rules.

By the same token, I don’t think it’s ethical for developers to provide tools to circumvent these restrictions. It is not up to developers to make the laws–or, in this case, to break them–no matter what they personally think is right or wrong.

I think that online censorship will be a growing cause for concern in the coming years because of our reliance on technology. Tech companies need to stand up for what they believe is ethical and right and work for freedom of speech by refusing to provide services when a country or government violates the company’s ethical code.

Artificial Intelligence

Artificial intelligence is pretty easily defined as a machine or computer capable of some level of computation that could normally be done by a human. This could be as simple as making a decision based on a mathematical comparison, or as complex as hearing a question and formulating a coherent, accurate response. It is similar to what I consider human intelligence in that it can take a problem, no matter how simple, and provide an answer or a solution. How it differs, in my opinion, is that it lacks human reason. That is, computers and AI are generally unable to take in intangible factors, such as compassion, ethics, and other human emotions that sometimes weigh on decisions. For example, an AI that makes business decisions based solely on numbers can’t take into account the ethics of huge layoffs and their effects on employees; it can only look at the situation from a pure numbers perspective.

I think that AlphaGo, Deep Blue, and Watson are all proof of the viability and power of AI. Each was programmed to perform a specific task, and each has proven it can do that task on par with–and sometimes better than–human professionals in the same field. AlphaGo and Deep Blue defeated world champions in Go and chess, respectively, and Watson defeated past champions on Jeopardy. While these are certainly crowning achievements that show how far computer science, technology, and artificial intelligence have come, there is no reason to believe the advancement of AI will stop there.

I think that the Chinese Room is a good counterargument to the Turing Test in the sense I described above. An AI can never fully take in the human components that are part of human intelligence, so it remains just a strong program–a “strong AI”–that is able to interpret certain events well enough to respond properly.

Related to this is my opinion that a computing system can never fully be considered a mind. Too many intangibles factor into decisions–emotions, relationships, and the fact that different humans respond to the same situation differently–for a system to fully mimic human intelligence. In a sense, humans are biological computers: we take in the factors of a problem or question and use them to come up with an answer or a solution. The fact remains, though, that humans can take in factors that computers can’t, making us a different breed of computer. The ethical implication is that we shouldn’t fully rely on computers to make our decisions for us. They can help us come up with solutions and present a problem in a simplified way, but there are too many human elements in everyday decision making to let computers and AI fully take over.

Encryption (Project 3)

Whether or not encryption is a fundamental right for US citizens is a fine line to walk. Part of me wants to say that personal privacy is necessary for the people, but at the same time national security is nothing to take lightly, and protecting it sometimes means giving up a little privacy. Therefore, I am leaning toward saying that encryption is not necessarily a fundamental right. Personal privacy is important, but that does not mean citizens are free from the eyes of the government. So, no, US citizens should not have a device with complete encryption that the government cannot see into. That being said, I do not think Apple should have to create a master key for the government, because that software could be leaked to the public or fall into the wrong hands, and then malicious hackers could do what they want with people’s phones and personal information.

Encryption is an important issue to me in the sense that I know how important secure connections and encrypted data are for keeping things safe from hackers, but it’s not something that comes to mind every day. Therefore, it does not really affect whom I support politically and financially, although it definitely should. Cyber-terrorism is an increasing threat in today’s society, and companies and the government should be taking active steps to prevent cyber attacks. So encryption should be a major issue when deciding whom to back politically or financially.

In the fight between national security and personal privacy, I think that national security will ultimately win. We hear too many tragic stories about attacks or hacks that could have been prevented if the government had just had more access to look for buzzwords in emails, texts, and calls. With these stories, it’s only a matter of time before the government interferes more and more with personal privacy. While this is not something I’m particularly fond of, I’m somewhat resigned to it. There are a lot of reasons for the government to start tapping our personal devices more and more, and it’s pretty hard to argue against them except by saying that personal privacy is a right. Therefore, I think it’s only a matter of time until most of what we do electronically is monitored in some shape or form.

Letter to the Editor of The South Bend Tribune

Dear Editor,


Encryption is a major issue in today’s society, particularly with the prevalence of technology in our everyday lives. Yet much of the public seems uninformed about what encryption means for their security. People use their phones, laptops, and tablets constantly, but complain when Apple won’t provide a software “master key” to help the government open up a phone. They don’t realize that doing so would create an avenue for hackers and others with malicious intent to see their texts, emails, pictures, and everything else on their devices. We’re not saying Apple is definitively right in refusing to make this software for the government, but we do think the public should be more informed about the encryption issue before jumping so quickly to conclusions spread about in the media for potentially millions to see.


Encryption has been vital to the maintenance of personal privacy since people began carrying most of their information in their pockets. For the last ten years or so, a single network hack has been able to leak millions of people’s credit card numbers, social security numbers, addresses, and more. Today, a single digital device may be the gateway to the owner’s social, professional, and financial networks and correspondence. Smartphone features like GPS mapping can retrace the owner’s steps and create a timeline of their activities. Beyond the more sterile records, phones also hold photos and other private information that offer a unique and quite comprehensive glimpse into the owner’s life and relationships. From scandalous leaks of nude images to emails, the public is very familiar with the dangers of these private accounts falling into the wrong hands. Access to all of this personal data is protected through careful security measures like encryption.


Encryption is a way of locking information so that it can only be made readable with a particular key. A smartphone protected by encryption is much like a safe protected by an extremely sturdy lock: only the owner of the safe possesses the key to unlock it and retrieve its contents. On iPhones, the data is encrypted with the user’s passcode as the key. As long as the phone is locked, so is all of its information. Each iPhone will accept only a limited number of incorrect passcodes before disabling itself for a period of time, and there can also be a cap on the total number of tries allowed; if the incorrect attempts reach that limit, the phone may reset, and all of its data will be wiped.
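To make the “passcode as the key” idea a bit more concrete, here is a minimal Python sketch of passcode-based encryption with an attempt limit. It is an illustration only, not Apple’s actual scheme: the 4-digit passcode, the PBKDF2 parameters, and the 10-try cap are assumptions for the example, and a real phone relies on dedicated hardware and far stronger protections.

```python
# Minimal sketch of "the passcode is the key" plus an attempt limit.
# Not Apple's real design; the passcode, iteration count, and cap are assumptions.
import base64
import hashlib
import os

from cryptography.fernet import Fernet, InvalidToken  # third-party "cryptography" package


def derive_key(passcode: str, salt: bytes) -> bytes:
    """Stretch a short passcode into a 32-byte key via PBKDF2-HMAC-SHA256."""
    raw = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded key


salt = os.urandom(16)
token = Fernet(derive_key("1234", salt)).encrypt(b"photos, messages, locations")

MAX_ATTEMPTS = 10  # assumed cap, mirroring the limited number of tries described above
attempts = 0


def try_unlock(guess: str):
    """Return the decrypted data for a correct passcode, None for a wrong one."""
    global attempts
    if attempts >= MAX_ATTEMPTS:
        raise RuntimeError("too many incorrect passcodes: data wiped")
    attempts += 1
    try:
        return Fernet(derive_key(guess, salt)).decrypt(token)
    except InvalidToken:
        return None  # wrong passcode: the data stays locked


print(try_unlock("0000"))  # None
print(try_unlock("1234"))  # b'photos, messages, locations'
```

The point of the sketch is simply that the data is unreadable without the passcode, and that the attempt limit is what stands between a short passcode and a determined guesser.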


What the FBI asked of Apple seems simple enough. They wanted a particular version of software written that would allow them to bypass the current incorrect-passcode limits. This would enable them to brute force the passcode on the phone by trying an exhaustive list of possible combinations until they found the right one. The software would not automatically unlock the phone; it would just weaken its security enough for the FBI to breach it. As we’ve already noted, the amount and type of information contained on an iPhone is extensive. Taken together, all of this data could be extremely helpful to law enforcement investigating criminal activity. It would enable them to uncover plans and intentions, and to recreate the suspect’s actions up to and during the time the crime was committed. In this particular case, the FBI stood to learn a lot about the deceased suspect that could give them greater insight into the tragic San Bernardino shooting.
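To see why removing those limits matters, here is a hypothetical sketch of the brute-force step itself, with a stored PBKDF2 verifier standing in for the phone’s real passcode check. The 4-digit passcode and iteration count are again assumptions; the point is only that exhaustively searching a short passcode is trivial once nothing slows the attempts down or wipes the data.

```python
# Hypothetical brute force of a 4-digit passcode once attempt limits are gone.
# The stored PBKDF2 hash is a stand-in for the phone's real verification step.
import hashlib
import os

salt = os.urandom(16)
secret_passcode = b"7291"  # assumed for the example; normally unknown to the attacker
stored = hashlib.pbkdf2_hmac("sha256", secret_passcode, salt, 100_000)


def brute_force(stored_hash: bytes, salt: bytes):
    """Try every 4-digit passcode until one matches the stored verifier."""
    for n in range(10_000):  # "0000" through "9999"
        guess = f"{n:04d}".encode()
        if hashlib.pbkdf2_hmac("sha256", guess, salt, 100_000) == stored_hash:
            return guess.decode()
    return None


print(brute_force(stored, salt))  # prints "7291" after at most 10,000 tries
```

Even with a deliberately slow hash, ten thousand guesses is nothing to a computer; the attempt limits and delays are what make a short passcode defensible.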


Apple asked the FBI — and the public — to look at this request from a security standpoint rather than a law enforcement one. Asking Apple to create software to weaken their encryption is, to return to our earlier simile, akin to asking the maker of a popular safe to create a master key that would unlock any safe of that brand. Creating that key would not, itself, be the problem. Apple is concerned about what it would mean for such a key to exist. Having the ability to break into any iPhone anywhere is no small power, and it raises some important questions. Who would have control over this ability? Although the FBI asserted it would be a one-time use, once created, what is to stop its continued use? And perhaps most importantly, what would it mean for the security of iPhones everywhere if this vulnerability existed, since it would be sure to become a prime target for hackers?


So it is in this light that we ask the public to consider the situation before jumping to conclusions and voicing outrage over Apple’s decision. How would they feel if such a master key to phones existed? And how would they feel if this key got into the wrong hands and their own phone was one of the ones hacked? Are those risks worth the benefit to law enforcement? These facts and questions are important to consider before taking a stance on the issue, and we think much of the public needs to take a step back and think critically about their beliefs regarding the implications of Apple’s decision.


Sincerely,

Kevin, Tabitha, and Andrew

Reverse Engineering

The DMCA imposes some restrictions on whether a copyrighted work can be reverse engineered. If a program or piece of software is copyrighted and protected by an access-control measure, the DMCA makes it illegal to reverse engineer it by circumventing that protection. One of the limitations this places on users of copyrighted products is that it can limit competition: the DMCA makes it legally gray whether a competitor can build something that works similarly, which discourages a lot of potential competitors from even trying.

In terms of DRM, I think it sits in another gray area of the law on circumvention and reverse engineering. That being said, I think it is ethical for companies to use these strategies to protect their property, since they are the ones who made it. In my last post, I said that I didn’t necessarily think patents were evil because a company deserves to profit off of what it makes, and DRM techniques are no different in my mind. That makes it unethical, in a sense, for end users to circumvent DRM on these products, since doing so costs the producing company money. So it is not moral to rip a CD or DVD from the physical media to a portable audio or video file: the company could have made money selling a portable version as well, and the ripped file could be distributed to many other people who would otherwise have bought the CD or DVD. In the same sense, it is not moral to remove the DRM from things purchased on iTunes.

By the same thought process, I do not think it is ethical to build tools that allow end users to easily bypass copyrighted software and material. Such tools help end users do things they would otherwise not be allowed to do or would have to pay for, which costs the company profits on its product. Therefore, car owners should not be able to circumvent the software in their cars, because there is a good chance the technology and software are there for their safety, and circumventing them could expose owners to a risk the company had accounted for and tried to prevent. Phone owners should not be allowed to unlock their phones, because then they could put things on them that they would otherwise have to pay the company for, and, as with the car, unlocking the phone could put them at unnecessary risk. As for researchers probing and reverse engineering to look for bugs and security flaws, I think that is ethical, because the DMCA specifically allows for this type of reverse engineering, and the researchers are doing it for the good of the company and the end users rather than trying to rip the company off.

Patents

Patents have been around for a long time and have been at the center of innovation in America for years. Recently, however, with the boom of software and new inventions seemingly arriving every month, questions are being raised about the effectiveness of patents, especially in the face of “patent trolls.” A patent is essentially a right granted by the government to an individual or an entity to be the sole party allowed to create and sell a certain product for a certain amount of time. This generally makes sense, as whoever invents a product should be the one allowed to profit off of it rather than have the idea stolen. However, the government has to be mindful of the ethical issues related to patents. For example, if the product is something that could become a relative necessity for ordinary people, issuing a patent could give an individual free license to create a monopoly of sorts on that invention.

In my opinion, patents should be granted, because if they weren’t, a culture of non-innovation would take hold. People would have little to no incentive to invent new things, which would stall technological advances. Because patents support this culture of inventiveness, they are beneficial to society. If they didn’t exist, odds are that many inventions would generate profits for someone other than the inventor; people would sit back, wait for others to do the hard work of inventing something new, and then jump in and profit from it.

The ethical and moral value of patents on software inventions is harder to see, which makes them harder to grant, but I do think they should be granted, albeit with extreme caution. The reason software is harder to patent is that the development of new and improved software is so rapid that a new software invention could become commonplace, and needed across many systems, just a few months later. In that case, with a patent stretching for one or several years, the entity holding it would be the only one able to sell vastly superior products built on the new invention. Therefore, I think caution should be used to grant patents only for certain facets or features of software rather than for its underlying core. This would allow companies to use the core technology while still differentiating themselves with their own features. However, the distinction between core and peripheral technologies can be hard to draw when something is first invented.

I think that the existence of patent trolls shows that the system is working. Although trolls usually acquire their patents from bankrupt companies and then charge licensing fees, they are the rightful owners of those patents. And had the original company not gone bankrupt, it would have been compensated for its invention, which is right. So although the trolls didn’t invent anything themselves, their existence shows that someone is being compensated for an invention, which is the way it should be.

Cloud Computing

Cloud computing is a powerful way for companies to sift through and make use of the incredible amount of data floating around in our world today. In the last few years, cloud computing has gone from being the future of computing to being the present. If a company doesn’t make use of it, it seems like they will be a step behind their competition. That being said, there are still ethical concerns with cloud computing. In my experience, the element that concerns me most is data safety. The data travels through many layers of computers and, in many cases, through the possession of many companies and third-party servers. This raises the ethical question of how these third parties should handle the data and its security. There is a belief, and to some extent evidence, that third parties won’t do their absolute best to provide optimal security, since it is not their own data they are protecting; they will instead provide security that works well enough for the client but may not be the strongest they could offer.

Cloud computing is a term thrown around a lot lately, but many people don’t know exactly what it is. Cloud computing is the use of off-premises servers to offload data storage and processing. This helps reduce server maintenance costs and, as the amount of data grows, keeps the number of on-premises computers to a minimum.

As a developer, there are many advantages and disadvantages to the cloud. The advantages include the ability to do a large amount of computing with very little computing power in house, and the fact that people at a third-party company handle the setup and maintenance of the servers. Over the summer at my internship, I got to develop in a cloud environment, and I was able to process a very large data file in very little time–about 30 minutes. If I had run the same analysis on the computer I was using in the office, it would have taken about a day. As for the disadvantages, the obvious one is the data security mentioned earlier. Another is that server setup is done by the cloud services provider, which makes it harder and more expensive to get a customized system, since providers usually only sell “cookie-cutter” servers.

For consumers, the biggest advantage is the speed of the cloud. For example, in my experience, Amazon is able to handle a lot of consumers at any given time and provide a good experience for them. The disadvantage of using the cloud as a consumer is, once again, security: many customers keep their credit card information with Amazon, for example, and they rely on Amazon to keep that information safe and secure.

Edward Snowden

Edward Snowden is a very polarizing figure–people tend to either love him or hate him. Personally, I don’t have a strong opinion on him; that is, I don’t see him as a hero or as a traitor. If I had to choose, though, I would say he’s a traitor, because he violated his NDA with the government and revealed information that could have been tied to national security. That being said, I don’t think any of it was especially surprising, since the public could generally guess what the NSA was doing, and the information may have become public eventually anyway. But, having chosen traitor over hero, it naturally follows that I think the United States should pursue extradition and prosecute Mr. Snowden for treason, since he broke contractual agreements with the CIA.

Edward Snowden leaked NSA and CIA documents showing that the NSA had gained access to public phone records, bugged certain rooms, and keyed in on certain people’s phone calls. He did this by revealing the documents to several news outlets, most notably the Guardian, so that they could report how the NSA was breaching privacy. I think the ethics of his actions are certainly up for debate, as there are two main sides to it: he told the public what was happening to their privacy, but he also broke a contract he had signed with one of the most secretive agencies in the world. As for the latter, I think that was completely unethical; signed, legal promises need to be kept to maintain professional trust. As for the former, I agree that the public needed to know what was happening behind the scenes, but at the same time there’s a reason the NSA exists, which is to promote national security, and it needs to be able to do its job to help thwart dangerous plots against the nation.

Ultimately, I lean more toward the view that Snowden harmed the security of the United States and our allies. All in all, as The Washington Post article pointed out, the public’s opinion of privacy didn’t change too much altogether, which leads me to believe the public wasn’t completely oblivious to what the NSA was put in place to do: “But a lot of times they just told us that the NSA was doing pretty much what you would have guessed they were doing.” So Snowden’s revelations basically just confirmed what many people already suspected, and in the process destroyed some US relationships and security measures. The revelations haven’t impacted me personally very much, because in my short technological career, what Snowden revealed was already pretty much assumed, and there has always been the threat of hackers. Electronic communication has always struck me as something to assume the worst about (that someone is listening or using the information), and my feelings about encryption are in the same boat. There’s always the possibility that anything done electronically is not safe, whether from a hacker or from the government, so it’s better to be safe than sorry.