
Strife

The Academic Blog of the Department of War Studies, King's College London


Facebook

Offensive Cyber Series: Dr Daniel Moore on Cyber Operations, Part I

June 10, 2021 by Dr Daniel Moore and Ed Stacey

Photo Credit: dustball, licensed with CC BY-NC 2.0

On Wednesday 10th March, Strife Interviewer Ed Stacey sat down with Dr Daniel Moore to discuss the operational side of offensive cyber. For part two of Strife’s Offensive Cyber Series, Dr Moore expands on his thinking about presence-based and event-based offensive cyber operations and discusses related topics such as the emergence of new organisational cyber structures, allied operations on networks and his upcoming book Offensive Cyber Operations: Understanding Intangible Warfare, slated for release in October 2021.

Ed Stacey: Danny, you have written in the past about distinguishing between presence-based and event-based offensive cyber operations. What are the key differences between the two?

Danny Moore: I came up with the distinction between presence-based and event-based operations as a commentary on the lack of distinction in most of the publicly accessible cyber doctrine documentation. Mostly what we see are offensive cyber operations treated as a uniform spectrum of possibilities that have the same considerations, the same set of staff associated with them and the same set of circumstances under which you would want to use them. But that is not the case.

A lot of the literature you see focusses on the technical deployment of offensive cyber operations – the malicious software involved in the process, the intended effect, what it means to pivot within a network – but that really only encompasses a fraction of the activity itself when we are talking about military-scale or even intelligence agency-scale of operations, at least where it counts. So I came up with this distinction to differentiate between what I think are two supercategories of operation that are so different in the circumstance, and so unique in how they would be utilised, that they are worth examining separately because they have distinct sets of advantages and disadvantages.

Presence-based operations are like the classic intelligence operation that has an offensive finisher. So you have everything that you normally would with an intelligence operation, including compromising the adversary’s network, establishing a foothold, pivoting within and gathering relevant information. But then there are additional offensive layers too, such as looking for the appropriate targets within the network that would yield the intended impact and weaponizing your access in a way that would facilitate achieving the objective. For example, would you need dedicated tooling in order to have an effect on the target? Or say you are looking to have a real-world, physical impact or even adversely degrade specific types of software and hardware, which would require significant capabilities. But crucially, the operation is managed over the period of at least many weeks, if not months and sometimes even years. And it can be a strategic set of capabilities that you would use possibly even just once, when needed, because once exposed it is likely to be counteracted, at least in the medium-term.

Event-based operations are completely different in that sense. They are the most robust equivalent that you could have to a proper weapon, in the military sense of the word. It is intended to be something that you can bundle, package up and deploy in multiple circumstances. Imagine – and I think this is the most helpful analogy – it is almost an evolution of electronic warfare, something that you can deploy on a ship or with a squad or even within an existing air defence grid. What it does is, instead of just communicating in electromagnetic signal, it also attempts to facilitate a software attack on the other side. And that sequence involves a completely different set of circumstances. You do not need to have an extended period of intelligence penetration of the network that you are targeting – that contact is likely to be minimal. Instead, what you have is an extensive research and development process where you collect the right technical intelligence in order to understand the target, craft the actual tool and then make it much more robust so that it can be used multiple times against the same or equivalent targets and not be as brittle to detection, so stealth is not really a component.

So that distinction is just a high-level way of saying that the circumstances are different, the types of manpower associated are different, but also that there are unique advantages and disadvantages when using each.

ES: What sort of benefits do states and their militaries and intelligence agencies gain by making this distinction?

DM: If you acknowledge these differences at a strategic and doctrinal level, it facilitates much better planning and integration of cyber capabilities into military operations. As you know, there is a constant tension between intelligence agencies and their equivalents in the conventional military around how offensive cyber capabilities are used. The question here is: how close is the relationship between the intelligence agency – which is the natural owner of offensive cyber capabilities, for historical reasons and usually because of a strong link to signals intelligence – and the military, which wants to incorporate these capabilities and to have a level of predictability, repeatability and dependability from these activities for planning purposes? That tension is always there and it is not going away entirely, but this distinction helps by grouping capabilities in a way that facilitates better planning.

If you have a supercategory of operation that relies heavily on intelligence-led penetration, pivoting and analysis, for example, that comfortably lives with the extreme assistance of an intelligence agency, if not actual ownership – and that will vary between countries. Whereas the more packageable type of capability is easier to hand-off to a military commander or even specific units operating in the field. It is something that you can sign off and say: this will not compromise my capabilities in a significant way if it is used in the field incorrectly, or even correctly, and gets exposed in some way, shape or form. So it is about different levels of sensitivities, it is about facilitating planning and I think it takes the conversation around what offensive cyber operations actually look like to a more realistic place that supports the conversation, rather than limits it.

ES: Focussing on the organisational tensions that you mentioned, new structures like the UK’s National Cyber Force (NCF) are emerging around the world. What are the operational implications of these efforts?

DM: The short answer is that the NCF is an acknowledgement of a process that has been happening for many years. That is, the acknowledgement that you need to build a bridge between the intelligence agency, which is the natural owner of these capabilities, and the military, that wants to use them in a predictable and effective way. So you are seeing outfits like this come up in multiple countries. It allows for more transparent planning and for better doctrinal literature around how cyber capabilities integrate into military planning. That is not to say it will fix everything, but it decouples the almost symbiotic relationship between intelligence agencies and offensive cyber operations.

Intelligence agencies will always play a significant part because, as I said and have written about as well, they have an important role to play in these types of operations. But we have matured enough in our understanding to be able to have a distinct, separate conversation about them that includes other elements in military planning that do not just draw from intelligence agencies. So the NCF and other equivalent entities are an acknowledgement of the distinctness of the field.

ES: This next question is from Dr Tim Stevens, who I spoke to last week for part one of this series. Will NATO allies follow the US’ lead and adopt a posture of persistent engagement in cyberspace? And just to add to that, if they did, what sort of operational challenges and opportunities would they face in doing so?

DM: The conversation around the US’ persistent engagement and defend forward mentality for cyber operations is one that is ambivalent and a little contentious, even within the US itself – whether or not it is working, whether or not it is the best approach and, even, what it is actually trying to achieve. If you read the literature on this, you will find many different interpretations for what it is actually meant to do. So will NATO or specific member states choose to adopt elements of this? Possibly. But it is unlikely to manifest in the same way.

The perception from the US that they are in constant competition with their adversaries in and against networks is accurate. We have increased friction as a result of how the internet is structured and how sensitive networks are structured. You consistently have to fend off adversaries and seek to engage them, ideally outside your own networks – a good concept to have and a good operational model to keep in mind. And I think it is a great way to educate military leaders and planners around the unique circumstances of operating against networks. That said, I do not know if NATO is going to adopt wholesale persistent engagement and defend forward or rather just incorporate elements of that constant friction into their own models, which I think is a necessary by-product of engaging networks.

Some of the countries within NATO are more prolific than others when it comes to such activities – the UK, for example, or even France. Obviously, countries run offensive cyber operations of their own: they consistently need to fend off adversaries from their critical infrastructure and they prefer not to do this by directly mitigating incidents within their own network. So the step of persistent engagement and defend forward does make sense, but I do not know if that is an adoption of the same doctrine or just some of the principles that it looks to embody.


Part II of this interview will be published tomorrow on Friday 11th June 2021.


The Rise of Digital Propaganda – An ‘Alt-Right’ Phenomenon?

January 22, 2020 by Tom Ascott

Co-founder of Breitbart News Steve Bannon described the news website as a platform for the alt-right (Image Credit: Wikimedia)

Without social media, the alt-right would not exist, Donald Trump would not be president, and the UK would not be leaving the European Union. As the American Sociological Association put it, ‘the rise of the alt-right would not be possible without the infrastructure built by the tech industry’. Social media is becoming the most important way for political campaigns to reach potential voters, and online misinformation campaigns use coordinated inauthentic activity to subtly manipulate citizens. It is the fastest, and can also be the cheapest, way of targeting an audience – much more so than door-to-door campaigning or flyering.

The alt-right isn’t simply more popular online than the left. In fact, there are far more left-wing political blogs, and blog readers often skew left-wing. Right-wingers tend to engage less with political discourse online and, when they do, they are more likely to be bipartisan. Despite that, the alt-right is far more successful online when it does engage.

The Success of Alt-Right Activity

Right-wing political groups have had a significant impact on international affairs through their online activity. By successfully using data harvesting, micro-targeting and meme warfare, they have sent out tailored, political messages to individuals or small groups, which are never seen by others. The messages leverage the data they have mined to be as effective as possible. It may appear unusual that there has been no left-wing equivalent of the Cambridge Analytica scandal – and it could be quite a while before we see the emergence of such – but it will be crucial to understand how the left might channel such activities.

The closest we have seen to a left-wing version of Cambridge Analytica is Project Narwhal, the database that the Obama team built in 2012. Project Narwhal started by slowly and manually joining discrete databases, each with a few data points on a single voter, to build their profile. Years later those profiles had grown, and the project had 4,000–5,000 data points on each American voter. Looking back at the ways the media fawned over Obama’s data strategy, it is not a surprise that the right took the ball and ran with it.
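The database-joining process described above can be sketched in a few lines of Python. This is purely illustrative: the function, field names and voter IDs are invented for the example and bear no relation to the actual Narwhal system; the point is simply how joining discrete records on a shared identifier accumulates data points into a single profile.

```python
def merge_profiles(sources):
    """Merge several {voter_id: {field: value}} databases into one profile map."""
    profiles = {}
    for db in sources:
        for voter_id, fields in db.items():
            # Each later source adds its data points to the voter's growing profile
            profiles.setdefault(voter_id, {}).update(fields)
    return profiles

# Toy records from three separate databases, keyed by a shared voter ID
voter_file = {"v001": {"name": "A. Smith", "district": "12"}}
donations = {"v001": {"donated_2012": True}}
canvassing = {"v001": {"contacted": "2012-09-14"}, "v002": {"contacted": "2012-10-01"}}

combined = merge_profiles([voter_file, donations, canvassing])
# combined["v001"] now holds four data points drawn from three databases
```

At campaign scale the same join, repeated across thousands of sources, is what turns a handful of data points per voter into several thousand.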

It is an anomaly that the alt-right thrives online, because identification can be risky for its members: those seen and identified attending rallies can lose their jobs or face other repercussions. Extreme-right opinions that are clearly racist, sexist or xenophobic can lead to users being blocked on mainstream platforms, so these users begin to ‘join smaller, more focused platforms’. Alt-right figures Alex Jones and Milo Yiannopoulos were banned from Facebook because they ‘promote or engage in violence and hate’. Laura Loomer, an alt-right activist, was banned from Twitter for tweeting at Ilhan Omar that Islam is a religion where “homosexuals are oppressed… women are abused and… forced to wear the hijab.” As a result, the alt-right has become more digitally agile, using tools to exploit larger platforms and reshare their views. Platforms like Gab have a much higher rate of hate speech than Twitter. Discord has also been used to radicalise and ‘red pill’ users towards extreme-right beliefs.

The tools of the alt-right are tools for disruption. It is only by disrupting the status quo, Breitbart co-founder Steve Bannon believes, that the alt-right can break into the political spectrum. These tools can be used to persuade or dissuade; ProPublica found that ads targeting liberals often urged them to vote for candidates or parties that did not exist.

The Left’s Slow Response

One reason why left-wing political parties have not used similar tools is exactly that conflation of such activities with the alt-right. Though there is plenty of dissent in left-wing politics over how centrist or left-leaning it should continue to be, groups on the left simply do not identify as alt-left. Cambridge Analytica offered the alt-right a chance to disrupt the right wing, but there is much less desire to disrupt on the left. Instead of a true alt-left there is only ‘an anti-Alt-Right‘. Bannon believes that Cambridge Analytica, and the chaos it created, was a tool that the right wing needed in order to survive. The ability to harvest data and use it to target specific individuals with political messaging appears to be a content-neutral process.

Any organisation could have done it, but the first to do so was Cambridge Analytica. It was an act of ‘evil genius’ to find individuals who weren’t motivated enough to engage in politics, target them with personalised messages and convert them to their specific brand of right-wing thinking, or to urge left-wing voters to disengage. It is hard to assess how prevalent online misinformation campaigns are. Groups will use neutral-sounding names, mask the political nature of their ads, or identify as partisan. Their only aim, however, is to confuse or dissuade voters.

Consequences for Social Media Platforms

The first mover had it easiest, and copying the process will be extremely difficult. Following the scandal, the infrastructure for data harvesting has started to be regulated. The UK Information Commissioner’s Office (ICO) was granted new powers in the Data Protection Act 2018 and the European Union introduced its General Data Protection Regulation in response to the scandal. Facebook has been forced to refine its policies on data sharing and, as a result, new data from the platform is less available than it once was.

After the scandal broke, the platform started to audit the data that apps could collect and began blocking apps that continued to take users’ data. As Mark Zuckerberg’s continued appearances before Congress show, if Facebook will not regulate itself, then perhaps it will be broken up. Where antitrust laws seek to punish companies for harming the consumer, however, it will be hard to penalise Facebook: users continue to opt in, voluntarily hand over data, and enjoy time browsing their personalised, if pyrrhic, feeds.


Tom Ascott is the Digital Communications Manager at the Royal United Services Institute. You can find more of his articles here.


Feature — Winning the Disinformation War Against the West

May 12, 2019 by Andrzej Kozłowski


The Ministry of Defence badge on a computer chip. Britain will build a dedicated capability to counter-attack in cyberspace and, if necessary, to strike in cyberspace. (Crown Copyright/Chris Roberts)

The rapid expansion of the Internet in the nineties encouraged the expectation among Western politicians and experts that liberal democracy would come to dominate the world and authoritarian regimes would slowly collapse. It was hoped that easy and fast access to uncensored information would strengthen civil society and opposition in authoritarian countries by empowering a free press and facilitating the planning and organisation of social and revolutionary movements that would overwhelm ruling governments. However, things took a different trajectory: Internet tools such as social media have become a double-edged sword, effectively employed against democratic countries to wreak information havoc and spread propaganda that undermines democratic processes.

A more serious problem than we think

The key event that demonstrated the power of social media was the 2016 presidential election in the United States, when Russian hackers from the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU) and the Federal Security Service of the Russian Federation (FSB), along with trolls from the Internet Research Agency, engaged in a disinformation campaign to influence the outcome of the election. Their main aim and their impact on Trump’s victory are disputed. However, this incident showed the importance of the Internet and social media and how easily public opinion can be manipulated by the newest technologies. Since then, policymakers and chiefs of intelligence and counterintelligence of NATO and EU countries have warned about the potential threat of external meddling in other elections in the West. Indeed, Russia attempted to meddle in the elections in France, the Netherlands and Germany but did not achieve an outcome comparable to the American presidential election of 2016.

Not only has disinformation been used to influence election processes, it has also been deployed to split societies by drawing attention to the most controversial cases. The waves of immigrants who came to Europe in recent years have divided the societies of Western countries. This division has been strengthened by fake stories of grave crimes committed by immigrants. Many people believed these stories and were upset by the supposed behaviour, leading some towards feelings of vengeance. Here, disinformation contributed to violent acts against immigrants but also increased distrust in the mainstream media and towards politicians who seemed to have overlooked these events. Social media were also used to influence voter behaviour in important referendums, such as those in the United Kingdom and Spain, supporting Brexit and the secessionist movement in Catalonia.

Last but not least, the anti-vaccination movement, strongly present on social media, poses a threat to the lives of citizens in the West. As a result, illnesses like measles, which had formerly been eliminated by vaccines, have re-emerged. The latest research shows that this movement was not spontaneous but rather state-inspired and strongly promoted on the Internet.

Western institutions have identified Russia as the perpetrator of these campaigns, and facing up to the problem of disinformation has become one of the most crucial challenges. Moreover, there is a high probability that other countries could follow, or have already followed, in Russia’s footsteps. The West needs to prepare by building a resilient society that is resistant to disinformation and propaganda and ready to deter potential foes.

Front page of European Commission’s “Final Report of the High Level Expert Group on Fake News and Online Disinformation” (European Commission)

Building an information-resilient society

Building an information-resilient society requires the close cooperation of four main entities: the government, civil society, social media platform owners, and the traditional media.

Governments

Despite the growing role of the private sector in cyberspace, the government ought to play a crucial role in initiating and coordinating actions that counter disinformation. First and foremost, the government needs to engage professionals in combating this phenomenon. Think tanks and non-profit organisations cannot resist information and psychological operations orchestrated by the professionals of secret services, because they lack the financial resources, access to sensitive information, advanced early-warning systems and staff numbers required. Some of these capabilities and tools, however, are in the hands of the counterintelligence agencies, which ought to assist such think tanks.

Moreover, the government needs to dominate the information sphere before elections and referendums. Constant warnings about potential interference and manipulation of public opinion ought to come from the heads of intelligence and counterintelligence. Some may claim that such warnings would only spread panic. Yet embedding a form of vigilance and awareness about potential disinformation, similar to that which occurs on, say, April Fools’ Day, is crucial and can strip potential assailants of their biggest advantage: surprise. The cases of France and Germany are particularly telling. Before the elections in both countries, politicians and secret service officers warned about potential manipulation, and during the elections themselves it was kept to a minimum.

The government also ought to prepare a clear legal framework to help social media companies eliminate detrimental content from their platforms. These laws ought to be effective and feasible but also remain adaptable to technological reality and transparent, in order to avoid accusations of political bias. Internet users ought to be aware that they can be penalised for inappropriate behaviour, not for their political views.

The next task of the government is to prepare politicians and administrative staff for possible disinformation campaigns. It should be done on two levels: by organising training and practice for politicians and civil servants on how to recognize disinformation on the Internet, and by ensuring that political parties are prepared, especially during election campaigns.

Governments should also not hesitate to ban certain media from attending official press conferences if it has been established that those media act as propaganda instruments. For example, during the election in France, Emmanuel Macron’s team banned Sputnik and Russia Today journalists, limiting those outlets’ freedom to spread disinformation.

The social media enterprises

Social media are used as tools to spread disinformation and influence democratic processes in many countries. This has become a significant problem for their executives, especially at Twitter and Facebook, both of which came under considerable public criticism after the 2016 presidential election in the United States. In response, they invested heavily in eliminating fake content and accounts responsible for spreading disinformation. This policy should be continued in close cooperation with government entities, which can help by identifying hostile accounts. However, the decisions made by social media enterprises should be clearly explained to avoid accusations of censorship.

Civil Society

The role of non-profit organisations should not be underestimated, but they ought not to play the central role in fighting disinformation. Instead, they ought to help government and social media enterprises identify propaganda and fake content, while their role remains advisory. Such organisations could effectively set up educational campaigns, teaching citizens how to avoid disinformation by fact-checking news and content on the Internet.

The Media

In the past, traditional media played the role of gatekeepers, filtering the flow of information and eliminating fake news. In the era of social media and direct access to information, this role has changed, but traditional media still have a part to play. They ought to create dedicated roles in the editorial team to trace fake news and stories and expose them to the public, which would restore their gatekeeping function in the age of social media. Furthermore, journalists are among the most popular figures on Twitter and Facebook and are often a source of news and information. If journalists spread fake news, intentionally or unintentionally, that fake news appears more credible. Media outlets should therefore organise courses and training to raise awareness among journalists about appropriately sourcing information.

Last but not least, the government needs to coordinate the efforts of all entities engaged in fighting disinformation. If the government fails in this role, the system will not work as one cohesive whole; there will instead be a constellation of single, loosely related entities with overlapping tasks and a lack of resources.

Creating effective and reliable deterrence

Building a society resistant to disinformation is one part of an effective strategy to fight disinformation. The remaining task is to deter potential agents of disinformation by establishing punishments. These penalties ought not to be limited to cyberspace but may also include other measures, such as economic sanctions. In most cases, it is difficult to respond to these agents of disinformation with a proportional information campaign; the obstacle lies in the authoritarian nature of the aggressor.

Because the election process in authoritarian states is a mere formality, public opinion and society there cannot be effectively influenced, and approaching aggressors with economic sanctions might be more effective in deterring such actions. However, even given the authoritarian nature of the regime, online activity ought to be considered. One option would be to demonstrate to the home population the inherently corrupt nature of the regime, under which the average citizen lives in inadequate conditions. Another draws on the example of the Union of the Committees of Soldiers’ Mothers of Russia, an organisation that influenced the attitudes of Soviet society towards the war in Afghanistan; a similar approach could be used to reveal the number of troops killed in the wars in Syria or Ukraine. Thirdly, a Russian-language equivalent of WikiLeaks could deliver materials compromising the Russian regime, hosted on a bulletproof website.

Economic sanctions are another powerful tool that can be used by the West. Such sanctions have been surprisingly effective against Russia. Freezing oligarchs’ assets or introducing travel bans can hurt  the closest circle of Moscow’s cronies and stop them from visiting their luxurious residences in Western Europe.

The next powerful tool for punishing Russia for its aggressive behaviour in this domain would be to expel Russian diplomats from Western countries. At first glance, this may look like standard retaliation in the international arena. However, considering that in some countries, such as the UK, half of the Russian embassy staff have worked for the Russian intelligence services, expelling Russian diplomats could effectively paralyse the work of the Russian intelligence network.

Every kind of Russian interference in the Western infosphere should be met with one of these effective measures. Such measures would also deter other countries from following Russia. The West needs to demonstrate the willingness and determination to punish agents of disinformation who have tried to infiltrate its Internet sphere.

Conclusion and recommendations

The key to winning the disinformation war is, first and foremost, to treat it as an existential threat and a strategic priority. Significant financial resources therefore need to be invested in countering the problem. Success is determined by the resilience of society and by reliable forms of deterrence. Both require effective cooperation among the government, traditional media, social media enterprises and civil society, and professional government agencies should be included in the fight against disinformation.

Effective cooperation among these entities allows the creation of a warning system, which is crucial because opponents benefit from the element of surprise. Every user of the Internet, from government clerks to journalists, has to be educated to raise awareness of information threats. There should be a transparent legal framework that helps to eliminate disinformation from the public sphere without inviting accusations of political bias. However, building a resilient society is not enough; forms of deterrence are also required. Such deterrence consists of a variety of measures extending beyond the information sphere.

Flagging certain media outlets as propaganda instruments and banning their journalists from attending press conferences is the next step.


Dr Andrzej Kozłowski is the editor-in-chief of CyberDefence24.pl, the biggest portal on cyber security and information warfare in Poland. Alongside his work as a journalist, he is a lecturer at the University of Lodz, Collegium Civitas in Warsaw and the European Academy of Diplomacy (EAD). In 2016, Dr Kozłowski successfully defended his PhD dissertation, “The Security Policy of the United States in Cyberspace (1993-2012): A Comparative Analysis”. He is an expert at several Polish think tanks, including the Institute of Security and Strategy Foundation, the Warsaw Institute for Strategic Initiatives and the Casimir Pulaski Foundation.


Image source: Flickr

