

April 8, 2016 Instant messaging service WhatsApp has announced that it will use end-to-end encryption to scramble all users' communications and ensure they can only be decrypted by the recipient's device. This has huge implications for intelligence agencies, as we are only too aware following the FBI/Apple debate around the San Bernardino gunman's iPhone. Public opinion is generally divided over end-to-end encryption, although security experts around the world are reluctant to weaken encryption mechanisms to allow security agencies to read communications. Here to comment on this news is Richard Anstey, EMEA CTO at Intralinks:

"This announcement by WhatsApp reflects a growing consumer awareness of the purpose and merits of encryption. It's a win for privacy advocates, but undoubtedly a cause of frustration to governments across the world. Following the Apple/FBI scandal, and the return to prominence of the Snooper's Charter in the UK, encryption has been pushed into the mainstream despite encryption algorithms having been around for years. End-to-end encryption is a very simple concept: as soon as a message leaves a sender's device, the characters are scrambled into a series of letters and numbers which mean nothing to anyone except the recipient, who holds the only key that can interpret the message.

"End-to-end encryption is already posing a problem for intelligence agencies, which are pushing for "backdoors" to decrypt messages between terrorists, some of which may be exchanged on WhatsApp. However, security experts across the world – including myself – are very reluctant to weaken encryption mechanisms, because this would have a wider knock-on effect on day-to-day life – both personal and professional.
It can cause all sorts of sensitive information to become less protected from hackers, criminals and unfriendly nation states."

About Intralinks
Intralinks (NYSE: IL) is a leading global technology provider of secure enterprise content collaboration solutions. Through innovative Software-as-a-Service solutions, Intralinks software is designed to enable the exchange, control and management of information between organisations securely and compliantly, even when working beyond the firewall. More than 3.1 million professionals at 99% of the Fortune 1000 companies have depended on Intralinks' experience. With a track record of enabling high-stakes transactions and business collaborations valued at more than $28.1 trillion, Intralinks is a trusted provider of easy-to-use, enterprise-strength, cloud-based collaboration solutions.
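The end-to-end principle Anstey describes can be illustrated with a toy sketch in Python. This is a one-time-pad XOR, chosen only for brevity; WhatsApp actually uses the Signal protocol, and nothing below reflects its implementation. The point is simply that the network only ever sees scrambled bytes, while only the holder of the key can recover the message.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time pad: XOR each byte with the key. XOR is its own inverse."""
    return bytes(d ^ k for d, k in zip(data, key))

# The key lives only on the recipient's device.
key = secrets.token_bytes(64)
message = b"meet at noon"

ciphertext = xor_cipher(message, key)          # what the network (or an agency) sees
assert xor_cipher(ciphertext, key) == message  # only the key holder can read it
```

A "backdoor" in this picture would amount to handing a copy of the key to a third party, which is exactly the weakening that the experts quoted above object to.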
April 7, 2016 Malware continues to be a growing and increasingly costly risk to mobile users today, with one in every 30 mobile browsing transactions and one in every seven mobile app sessions proving potentially harmful. In fact, roughly 5.9 percent of subscribers encounter a risky website every day, delivered through the URLs and mobile apps that mobile users access daily, according to our research. Even more concerning is that teens and children are especially vulnerable as the proliferation of mobile devices and of online and app activity increases dramatically. And because mobile is ingrained in all we do and how we live, it's become increasingly difficult to identify and mitigate the growing volume of attacks targeted at this vector. While there are vendors out there who represent various parts of the ecosystem and focus on everything from mobile device management (MDM) to endpoint security, communication service providers (CSPs) are in a unique position in the industry because they are at the heart of the digital experience and can stop threats at the network level. CSPs have access to a goldmine of network user data that can be used to better understand a range of user profiles when it comes to risky behavior. When armed with relevant data, CSPs can gain insights into who might be most susceptible to engaging with sites that may contain malware, spyware or phishing scams, and intervene with network-based solutions that can minimize that user's specific risks. By offering network-based security services, CSPs have the opportunity to provide added value to their subscribers and protect users based on their personal mobile habits and behaviors. At the same time, they gain a unique opportunity to monetize the network, increase ARPU and even reduce churn.

What's the big deal?
In large part, mobile security is an afterthought for consumers and business people who don't have the time to manage multiple subscriptions, update to the latest software version or worry about where they click (even if a link appears to come from someone they trust). Unlike the situation on fixed networks, while some regulators already require mobile operators to provide basic security against mobile malware, a large majority do not. And while every mobile user is at risk of security threats, no two users are alike in their risky behavior and, in turn, in the security measures needed for them to remain safe.

What user profiles are at the greatest risk?

We found that, on average, mobile subscribers have about 72 interactions on three different websites on any given day. Whether it be a social networking platform, a trending game, a news application or an e-commerce website, every time a user touches content on a website or mobile app, they're leaving themselves vulnerable to attack. The key to understanding who is at risk is the ability to accurately identify profile groups that represent common mobile user perceptions, expectations and behaviors. Segmenting mobile subscribers by demographics and usage classifications can help CSPs determine the types and level of security risks each unique customer might encounter within the network as they go about their typical daily business. When you get down to the data, there are some interesting trends around which profiles are at greatest risk – and it might not be who you most expect. According to research conducted by Allot Communications, business people display the riskiest online behavior, with 79 percent of businessmen and 67 percent of businesswomen utilizing potentially risky mobile apps on a daily basis. These numbers are followed closely by youths and millennials, 67 percent of whom also access questionable apps on a regular basis, putting their mobile devices and personal information at risk.
While mobile app downloads are oftentimes protected, their outgoing use is not, fooling certain users into believing they are accessing harmless apps when in truth they are leaving themselves susceptible to mobile threats with each and every use. Take clicking a link in a social app like WhatsApp, for example; while the app download itself is protected, accessing that outside link may not be.

Why is this important?

More and more, CSPs are faced with the task of keeping their subscribers secure from the oncoming slew of cyber threats that continue to increase both in size and sophistication. Fortunately, CSPs can be highly effective when it comes to halting cyber attacks. In the face of widespread, emerging, and more persistent online threats, operators can utilize subscriber data to protect users from malware and other Internet-borne threats that can harm reputation and productivity, damage mobile devices, compromise personal data, and cause financial loss. When armed with relevant data and information surrounding customer behavior — for example, knowing if the user is a businesswoman on the go or a child accessing educational apps — CSPs are able to engage with subscribers to identify how to minimize their specific security risks. With the insider knowledge available through subscriber data comes the ability to offer individualized security services to protect subscribers from harmful malware. CSPs can provide services ranging from network-based anti-malware to parental controls to protect consumers against cyber attacks that can cause the loss of personal and professional content. For example, rather than providing security per app, safeguarding users at the network level allows security measures to provide a protective blanket for all mobile online activity. With access to a user's unique mobile preferences and use cases, and the ability to analyze each individual, CSPs are better positioned than ever to protect their subscriber base.
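In caricature, the kind of network-level, profile-aware filtering described above might look like the sketch below. The host lists, profile names and per-profile policies are all invented for illustration; this is not Allot's product logic.

```python
# Hypothetical threat-intelligence categories, keyed by hostname.
RISKY_HOSTS = {
    "malware-cdn.example": "malware",
    "free-prizes.example": "phishing",
}

# Hypothetical per-profile policies: which categories each profile is shielded from.
PROFILE_POLICY = {
    "child":    {"malware", "phishing", "adult"},  # the broadest protection
    "business": {"malware", "phishing"},
}

def network_verdict(profile: str, host: str) -> str:
    """Return 'block' if the host's category is barred for this subscriber profile."""
    category = RISKY_HOSTS.get(host)
    if category and category in PROFILE_POLICY.get(profile, set()):
        return "block"
    return "allow"

assert network_verdict("child", "free-prizes.example") == "block"
assert network_verdict("business", "news.example") == "allow"
```

Because the check runs in the network rather than on the handset, it covers every device behind the subscription, which is the "protective blanket" the article argues for.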
This not only secures the users themselves, but also gives CSPs a competitive advantage over providers that may not be utilizing this critical user data to fight off threats to user privacy and content. By analyzing network data, filtering users into highly targeted categories, and offering network security that provides an umbrella over users' complete online activity, CSPs gain a major advantage when it comes to thwarting cyber crime in their networks and keeping users consistently protected in the face of malware.

About Yaniv Sulkes
Yaniv Sulkes is a telecommunications professional who has been designing, developing, productizing and marketing industry-leading solutions for over 15 years. Sulkes currently serves as the AVP of marketing for Allot Communications. Prior to Allot, Sulkes managed a large-scale telecom engineering project and served in various software engineering capacities. Sulkes has an M.Sc. in electrical engineering and a B.Sc. in industrial engineering and management from Tel Aviv University.
April 7, 2016 According to a ServiceMax survey, 75 per cent of people who call out a field service technician typically do so because a product has broken, not for maintenance purposes. What this means for field service professionals is that when a customer calls, they likely need a rapid fix. That's why the first-time fix rate is the holy grail of field service providers. As head of managed service provider IT Specialists (ITS), I've found that to keep second site visits to a minimum and improve the customer experience, field service managers should avoid these mistakes.

Mistake #1: Inefficiently Managing Spare Inventory
The ServiceMax survey referenced above indicated that if an engineer had to return to the customer site, 61 per cent of the time it was because the technician didn't have the parts needed to solve the issue. At ITS, we solve this problem by assigning engineers to four regions across the UK. We also use nine regional depots located across the country, which enables engineers to store and gather replacement parts quickly for customers. This strategy has enabled us to offer low on-site response times tied to service level agreements and to achieve a first-time fix rate greater than 92 per cent (according to Aberdeen Group, the average for best-in-class field service organisations is 88 per cent).

Mistake #2: Mismanaging Engineers' Skills
Investing in training and additional certifications will widen the organisation's pool of engineers who are equipped to work on certain equipment or software. To ensure each assignment is a proper fit for the engineer's unique skills and certifications, the business can approach the dispatch process methodically and strategically. For example, senior engineers can use their experience with the business and their familiarity with engineers' capabilities to schedule site visits.
Mistake #3: Not Offering Preventative Services
Even better than achieving a first-time fix is preventing a system malfunction in the first place. This is particularly important if the customer has recovery time objectives to meet for business continuity and disaster recovery purposes. At ITS, we use the remote management tool N-able, which is installed on the customer's servers and desktops and allows us to monitor most of the customer's systems. If a potential issue occurs, our technical support team can respond to it before the customer is even aware it exists. For example, we use N-able to manage printers for Howden's Joinery, a UK-based manufacturer and supplier of kitchens and joinery products. Previously, Howden's printers were not networked, consumables were unmonitored, supplies replenishment was not automated, and paper use was not cost-effective. Having implemented monitoring software (after networking the printers), we are now able to address any issues with the printers and manage the supply of consumables.

Mistake #4: Succumbing to Business As Usual
It's all too easy to fall into a routine of performing processes a certain way because "that's how we've always done it," but it's important to continually generate fresh ideas and solutions for business challenges. The organisation could hold a monthly review meeting where department heads review the past month's performance and conduct real-time SWOT analyses on every aspect of the business. The meeting could encompass performance reviews, business threats, resource planning, development opportunities, and statutory and legal responsibilities. ITS uses these meetings to generate innovative ways to solve client problems as well. For instance, road freight company Baxter Freight wanted us to not only provide new hardware and build a network but also brainstorm ways to future-proof their business.
The plans had to benefit both ITS and Baxter Freight, with products that were cost-effective for both businesses. Working together, the ITS team created a strategy for improving Baxter Freight's business resilience. The strategy included plans to adopt larger products, such as a managed cloud-based disaster-recovery-as-a-service platform, as the business became more established.

Mistake #5: Failing to Familiarise Engineers With Product Offerings
Whether engineers are supporting a product sold by the organisation or providing a service offered by a managed service provider (MSP), they need to be familiar with all the products and services the organisation provides. Using this knowledge, the engineer can suggest other solutions that can solve the client's unique business challenges. For instance, an MSP's engineer might go on-site to repair a server and hear the client mention that the organisation is having trouble coping with data sprawl and is considering virtualising some of its environment. The engineer knows that the MSP offers cloud-based infrastructure as a service (IaaS), so the engineer can suggest that as a solution. While plugging services that are unrelated to field service might seem counterproductive, doing so shows the customer that the organisation is able to meet the client's business objectives. In turn, the client is more likely to continue the relationship with the business.

Mistake #6: Neglecting Regulatory Requirements
Regulatory compliance is a pressing concern for organisations across multiple industries. That's why field service organisations need to be able to demonstrate that they can meet regulatory requirements. The organisation might choose to adopt a business continuity standard or undergo a third-party accreditation process to achieve a certification such as ISO 9001 for quality management systems or ISO 27001 for information security management systems.
By avoiding these pitfalls, field service organisations will increase their first-time fix rates, improve their ability to prevent issues before they occur and help clients meet their business goals.

About Matt Kingswood
Matt Kingswood is the Head of Managed Services at Midlands- and London-based IT Specialists (ITS), a nationwide managed IT services provider. ITS is part of the US Reynolds and Reynolds company, which has a strong heritage in data backup and recovery services. In his position, Matt is responsible for developing managed IT services within the UK and is currently focused on the next generation of cloud and recovery products. Matt has more than 20 years of experience in the information technology industry, and was formerly CEO of The IT Solution – a full-service IT supplier acquired by ITS. Since joining ITS, he has led efforts to introduce a range of managed services based on the new ITS cloud platform. Previously Matt had a career in technology for several top-tier investment banks before founding and selling several companies in the IT services industry. Matt has an MBA from The Wharton School of the University of Pennsylvania and a Master's in computer science from Cambridge University.
April 7, 2016 To work on the Incapsula team at Imperva is to be exposed to DDoS attacks all of the time. From watching 100 Gbps assaults making waves on computer screens around the office, to having our inboxes bombarded with reports of mitigated assaults, DDoS is just another part of our awesome daily routine. Yet every once in a while an attack stands out and makes us really take notice. These are the ones we email each other screenshots of, discuss with the media and write about in our blog. Often, these assaults are canaries in a coal mine for emerging attack trends. It's one of these canaries that I want to talk about here—an attack that challenges the way we think about application layer DDoS protection.

A bit about application layer DDoS attacks

Broadly speaking, layer 7 (aka application layer) DDoS attacks are attempts to exhaust server resources (e.g., RAM and CPU) by initiating a large number of processing tasks with a slew of HTTP requests. In the context of this post it should be mentioned that, while deadly to servers, application layer attacks are not especially large in volume. Nor do they have to be, as many application owners provision for no more than 100 requests per second (RPS), meaning even small attacks can severely cripple unprotected servers. Moreover, even at extremely high RPS rates—and we have seen such attacks—the bandwidth footprint of application layer attacks is usually low, as the packet size for each request tends to be no larger than a few hundred bytes. Consequently, even the largest application layer attacks fall way below 500 Mbps. This is why some security vendors and architects pitch that it is safe to counter them with filtering solutions that don't necessarily offer additional scalability.

A ginormous HTTP POST flood

The attack that challenged this theory occurred a few weeks ago, when one of our clients—a China-based lottery website—was the target of an HTTP POST flood attack, which peaked at a substantially high rate of 163,000 RPS.
Attack traffic in RPS (requests per second)

As significant as this request count was, the real surprise came when we realized that the assault was also consuming bandwidth at 8.7 gigabits per second (!)—a record for an application layer attack and definitely the largest we had ever seen or even heard about up until that point.

Attack traffic in Gbps (gigabits per second)

Looking to understand how an application layer attack could reach such heights, we inspected the malicious POST requests. What we found was a script that randomly generated large files and attempted to upload (POST) them to the server. By doing so, the perpetrators were able to create a ginormous HTTP flood consisting of extremely large content-length requests. These appeared legitimate, up until the TCP connections were established and the requests could be inspected by our application layer DDoS mitigation solution. The attack campaign was launched from a botnet infected with a malware variant. From there, it accessed the website under the guise of a Baidu spider. Overall, the attack traffic originated from 2,700 IP addresses, the bulk of them located in China.

Why 8.7 Gbps DDoS spells trouble for hybrid DDoS protection

When taken out of context, an 8.7 Gbps attack may not seem like cause for concern—especially these days, when security service providers regularly share reports of 200, 300 and 400 Gbps assaults. However, those are all network layer attacks; they're expected to be large. A multi-gigabit application layer assault, on the other hand, is an unforeseen threat. As such, it can succeed where a much larger network layer attack would fail. This is because application layer traffic can only be filtered after the TCP connection has been established. Unless you are using an off-premise mitigation solution, this means that malicious requests are going to be allowed through your network pipe, which is a huge issue for multi-gig attacks.
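Since this flood consisted of requests with extremely large declared body sizes, one illustrative countermeasure is to reject a POST whose Content-Length header is implausibly big before reading the body at all. The threshold and function below are a hypothetical sketch, not Incapsula's actual logic:

```python
MAX_POST_BYTES = 1 * 1024 * 1024  # hypothetical per-request ceiling (1 MB)

def should_reject_post(headers: dict) -> bool:
    """Drop a POST early if the declared body size is abusive or malformed."""
    raw = headers.get("Content-Length", "0")
    try:
        declared = int(raw)
    except ValueError:
        return True  # malformed header: refuse rather than read an unknown body
    return declared > MAX_POST_BYTES

assert should_reject_post({"Content-Length": "50000000"})   # huge upload: reject
assert not should_reject_post({"Content-Length": "512"})    # normal form post
```

The catch, as the next section explains, is where this check runs: the oversized bytes still traverse your uplink before any on-premise appliance can apply it.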
A case in point is hybrid DDoS protection, in which an off-premise service is deployed to counter network tier threats, but customer-premises equipment (CPE) is used to mitigate application tier attacks.

The bottleneck in hybrid DDoS protection topology

While conceptually effective, the Achilles heel of this topology is network pipe size. For example, to successfully mitigate a ~9 Gb layer 7 attack—like the one described—a CPE would require a 10 Gb uplink. Otherwise, the network connection would simply get clogged with DDoS requests, which cannot be identified as such until they establish a connection with the appliance. An insufficient uplink in this situation would result in a denial of service, even if the appliance filters the requests after they go through the pipe. Granted, some of the larger organizations today do have a 10 Gb burst uplink. Still, perpetrators could easily ratchet up the attack size, either by initiating more requests or by utilizing additional botnet resources. Hence, the next attack could easily reach 12 or 15 Gbps, or more. Very few non-ISP organizations have the size of infrastructure required to mitigate attacks of that size on-premise. Furthermore, application layer attacks are easy to sustain. Recently we witnessed one that extended for many days, and even ten days of burst creates a nightmare in overage fees. From a financial point of view, this is one of the main reasons why DDoS mitigation services exist—to offer cost-effective scalability as an alternative to paying for high commits and overages.

The canary in the coal mine

Experience has shown that effective DDoS methods are rarely an exception to the rule. As we speak, the aforementioned attacking botnet remains active and the technique used in the attack is still being employed. Furthermore, it is likely to become more pervasive as additional botnet operators discover its damage potential.
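The pipe-size arithmetic behind this bottleneck is simple enough to sketch; the figures below are illustrative:

```python
def uplink_saturated(attack_gbps: float, legit_gbps: float, uplink_gbps: float) -> bool:
    """The CPE can only filter what the pipe can actually deliver to it."""
    return attack_gbps + legit_gbps >= uplink_gbps

# The ~8.7 Gbps assault against a 10 Gb uplink leaves little headroom...
assert not uplink_saturated(8.7, 1.0, 10.0)
# ...and a modest ratchet to 12 Gbps clogs the pipe outright.
assert uplink_saturated(12.0, 1.0, 10.0)
```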
The existence of these threats makes another good case for off-premise mitigation solutions that terminate HTTP/S outside of the network perimeter. They are unrestricted by your network's pipe size and are able to scale on demand to filter any amount of application layer traffic. This is exactly what happened with the above-mentioned 8.7 Gbps layer 7 assault, when our Website Protection service was able to handle the specific HTTP flood attack vector automatically and out of the box. Having said that, we do realize that some organizations are under regulatory obligation to terminate HTTPS encryption on-premise, and have no choice but to use mitigation appliances. If this is the case, our best advice is to consider upgrading your uplink so that it can at least counter attacks below 10 Gbps. One way or another, this assault is a reminder to consider scalability when strategizing defense plans against application layer attacks. Further details about the attack can be found on our blog.

About Imperva
Imperva (NYSE: IMPV) is a leading provider of cyber security solutions that protect business-critical data and applications. The company's SecureSphere, Incapsula and Skyfence product lines enable organizations to discover assets and risks, protect information wherever it lives – in the cloud and on-premises – and comply with regulations. The Imperva Application Defense Center, a research team comprised of some of the world's leading experts in data and application security, continually enhances Imperva products with up-to-the-minute threat intelligence, and publishes reports that provide insight and guidance on the latest threats and how to mitigate them. Imperva is headquartered in Redwood Shores, California.
April 6, 2016 Security researchers and hackers are caught up in an endless game of cat and mouse, with threats constantly evolving to thwart even the most stalwart of defences. Traditional methods of combatting new threats, reliant on signature-based approaches to detecting malicious files, URLs or IP addresses, are failing to block more sophisticated attacks, resulting in an overwhelming number of attacks slipping under the radar. Even the much-acclaimed sandbox approach has recently come under attack, as hackers find innovative new ways to detect that code is running in a virtual environment and lie dormant until released from captivity.

It's not just the tactics that have dramatically changed; so too has the nature of 'end points' themselves. Today they are just as likely to reside in the cloud, or be a mobile or tablet owned by the employee, as a traditional laptop or PC. And as the IoT comes of age, the number and nature of end points in need of protection could spiral out of control. The stark reality is that traditional security defences that use static signature-based methods to determine whether a file is malicious or benign are simply not up to the job. What's more, analysing the binary structure of suspected malicious code to identify similarities with different files or families of malware is only marginally more effective, since attackers can quickly adapt and create more variations on the theme that will render statistical, mathematical models almost as useless as a normal static signature. A new, more robust, disruptive approach that focuses on the actual core of malware – its behaviour, which cannot change as easily as its hash or other static indicators – is long overdue.

A new Era of Endpoint Protection

Enter the next generation of endpoint protection (NGEPP) solutions, which – like their cybercriminal adversaries – have dramatically evolved their modus operandi.
Their emphasis is on a behaviour-based approach to malware detection which, unlike the signature or sandbox approach, is not content to concentrate solely on mitigation, but focuses instead on offering real-time prevention, detection and mitigation along with forensic analysis across the entire attack lifecycle. The ability to see what is running on an endpoint, and how every application or process is behaving, is key to combatting the detection problem. What's more, this analysis needs to happen at the scene of the crime, namely the end point itself. Like any disguise, it's a lot easier to change your appearance than it is to change the way you act. By tracking the behaviour of a threat in real time from the point of detection through mitigation, remediation and forensic analysis, security teams can start to bring advanced malware and zero-day exploit threats under control.

Recognising the 'Masters of Disguise'

So how does NGEPP work? A layer of pre-emptive protection initially stops existing known threats in their tracks at the point of entry, replacing the capabilities traditionally provided by antivirus or host-based IPS. The sheer volume of new threats that surface daily, including new forms of malware, zero-day exploits and insider threats using tools like PowerShell to avoid detection, means you need to go much deeper than simply protecting against known threats, to detecting previously unknown threats. New endpoint technology is capable of detecting these new, stealthy threats not by what they are, but by how they act, regardless of what disguises they might use to try and evade detection. Tackling these unknown, targeted attacks requires real-time monitoring and analysis of application and process behaviour, as well as the ability to determine the context of the attack to minimise the possibility of false positives.
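In caricature, a behaviour-based verdict weighs what a process does rather than what it is. The behaviours, weights and threshold below are invented purely for illustration and don't represent any vendor's actual model:

```python
# Hypothetical behaviour weights: how suspicious each observed action is.
BEHAVIOUR_WEIGHTS = {
    "mass_file_encryption": 5,   # a classic ransomware tell
    "process_injection":    4,
    "c2_beaconing":         4,
    "registry_persistence": 3,
    "reads_user_documents": 1,   # common in benign software too
}

def risk_score(observed_behaviours) -> int:
    return sum(BEHAVIOUR_WEIGHTS.get(b, 0) for b in observed_behaviours)

def verdict(observed_behaviours, threshold: int = 6) -> str:
    """Block on the combination of behaviours, regardless of the file's hash."""
    return "block" if risk_score(observed_behaviours) >= threshold else "allow"

assert verdict(["reads_user_documents"]) == "allow"
assert verdict(["mass_file_encryption", "c2_beaconing"]) == "block"
```

Because the score is built from actions rather than signatures, recompiling or repacking the malware (changing its "disguise") leaves the verdict unchanged.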
This inspection needs to occur even when the user is offline, to avoid the possibility of USB or other infected digital devices becoming the source of an attack. In this way, even attacks which have never been seen before can be detected and stopped at their source. However, to complete the task it's vital that the final steps of mitigation and forensic analysis are performed, in order to complete the whole process and prevent any recurrence. To avoid any negative residual impact, the NGEPP should be capable of responding to an attack in a variety of different ways, such as quarantining a file, killing a process, disconnecting an infected machine from the network or shutting it down completely. This needs to be automated to ensure that it occurs before the threat has a chance to 'phone home' to a command-and-control server to deliver its payload, or move laterally.

Rolling Back Time

To ensure the network returns to its former state and doesn't harbour any unwanted vestiges of the attacker's visit, such as modified files or an encrypted hard disk from a ransomware attack, the endpoint software should be capable of rolling back to a pre-attack status. The final part of the puzzle is figuring out what caused the attack, and that's the forensics part. It's vital to be able to quickly analyse the scale and scope of the attack, pinpointing who was targeted and with what type of threat. These learnings accelerate the remediation process and help organisations avoid a similar situation occurring further down the road. With the advent of new regulations like the EU General Data Protection Regulation looming on the horizon, it has never been more important to secure and protect sensitive data. Businesses everywhere are waking up to the fact that legacy security approaches are becoming less and less effective against an arsenal of constantly evolving attacks by cybercriminals, nation states, and terrorist organizations.
As the risks and regulatory fines escalate dramatically, a new generation of security companies is rising to the challenge and proving worthy adversaries to hackers. NGEPP solutions promise to provide the mousetrap that ends the eternal cat-and-mouse game of one-upmanship that has dogged the security profession for far too long, and to put security professionals back in control of their IT environment.

About Tomer Weingarten
Tomer co-founded SentinelOne, a next-generation endpoint security company, in 2013. He is responsible for the company's direction, products, and services strategy. Before SentinelOne, Tomer led product development and strategy for the Toluna Group as VP of Products. Prior to that he held several application security and consulting roles at various enterprises, and was CTO at Carambola Media.
April 5, 2016 Never before has Mac OS X been as heavily targeted by cybercriminals as now. Whereas infections like browser hijackers and ad-serving malware aren't newcomers on the Mac arena, crypto ransomware appears to be making its first baby steps toward the invasion of this huge niche. The term denotes a cluster of malicious programs that stealthily infiltrate computers, encode the victim's personal files and extort money, usually Bitcoins, in exchange for a secret decryption key. Windows users have been suffering from file-encrypting Trojan assaults for years, with the early incidents recorded back in 2011. By contrast, Apple's strong focus on code verification and elaborate security mechanisms held back the nastiest of attacks. Maintaining the status quo, however, turned out to be a nontrivial challenge. Ironically enough, it is white hat researchers who pioneered the creation of Mac ransomware, and perpetrators simply followed suit.

A Wake-Up Call

In November 2015, a Brazilian security enthusiast, Rafael Salema Marques, demonstrated that Mac OS X isn't bulletproof against ransomware plagues. He spread the word about his proof-of-concept, in which a program he dubbed Mabouia was able to get around the defenses of a Mac machine and wreak havoc with files in a matter of minutes. The PoC infection is written in C++ and applies 32 rounds of the XTEA block cipher to encrypt data and thereby render it inaccessible. Just like real-world ransomware, it generates a 128-bit key, transmits it to a C2 server and recommends a sleek recovery service requiring a fee. Marques also added some ransom pricing flexibility to the mix, playfully offering three different payment models to hypothetical targets. The "Not as Important Plan" implies the decryption of 20 files and a handshake for $50; the "Important Plan" presupposes the recovery of 100 files plus a hug for $70; and the "VIP Plan" guarantees the decoding of all files and a kiss as a bonus for $100.
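For the curious, the XTEA cipher that Mabouia reportedly applies is compact enough to sketch in full. Below is a standard XTEA block encrypt/decrypt in Python (a 128-bit key given as four 32-bit words, 32 cycles of two Feistel rounds each), shown purely for illustration; this is not Marques's actual code:

```python
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF  # keep all arithmetic within 32 bits

def xtea_encrypt_block(v0, v1, key, cycles=32):
    """Encrypt one 64-bit block (two 32-bit words) under a 4-word (128-bit) key."""
    total = 0
    for _ in range(cycles):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (total + key[total & 3]))) & MASK
        total = (total + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (total + key[(total >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt_block(v0, v1, key, cycles=32):
    """Run the rounds in reverse to recover the plaintext block."""
    total = (DELTA * cycles) & MASK
    for _ in range(cycles):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (total + key[(total >> 11) & 3]))) & MASK
        total = (total - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (total + key[total & 3]))) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)  # sample 128-bit key
ct = xtea_encrypt_block(0xDEADBEEF, 0xCAFEBABE, key)
assert xtea_decrypt_block(ct[0], ct[1], key) == (0xDEADBEEF, 0xCAFEBABE)
```

The asymmetry of ransomware's threat model is visible here: the cipher itself is tiny and public; what the victim lacks is only the key, which the malware ships off to the attacker's server.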
All of the above go with "lifetime support", which is particularly funny. Mabouia is executed when a Mac user extracts a ZIP archive, which can be delivered over a phishing email disguised as a missed delivery notification, a payroll message or a similar eye-catching subject. Since the app only targets files stored in the User folder, it can make changes to data without elevated privileges. All in all, this PoC should have raised some flags, because it was the first viable crypto malware tailored for the Mac. The author provided his full code to Apple and Symantec so that security researchers could prepare countermeasures for likely attacks that aren't purely educational. The lesson, however, wasn't learned, and the bad guys ended up outsmarting the industry. The Menace Gets Loose Things started getting out of hand when the first real-world Mac ransomware emerged in early March 2016. Referred to as KeRanger, the strain initially circulated via a poisoned downloader of Transmission 2.90, an edition of a popular open-source BitTorrent client compatible with Mac OS X. The hackers had managed to compromise the official Transmission web page and replace the legitimate application's DMG file with a malicious loader. Consequently, everyone who installed that version caught the ransomware. KeRanger's unimpeded distribution stemmed from the fact that it was signed with a valid Mac developer certificate; Apple's Gatekeeper therefore didn't identify or block it in the early stage of the campaign. For some reason, the infection remains dormant for three days after its code is executed on a target Mac. Then it traverses the hard drive to spot files matching a predefined range of extensions, looking for personal documents, images, videos, databases and other potentially important data.
KeRanger continues the onslaught by reaching out to its Command & Control server via The Onion Router (Tor) and obtaining a unique encryption key. The victim's files ultimately become encrypted with the 2048-bit RSA algorithm. This crypto is asymmetric, which means the criminals' server is the only place keeping the private decryption key. The ransomware displays a document named README_FOR_DECRYPT.txt, which instructs the infected Mac user on how to recover the data. In particular, the victim needs to send 1 BTC, or around $400, to redeem what's locked. KeRanger's operators only accept Bitcoin, because it helps anonymise payment transactions and evade tracking by law enforcement. To prove that the deal is real, the scammers will decrypt one file for free. To their credit, Apple revoked the rogue app development certificate shortly after the malicious campaign commenced; KeRanger in its original form is therefore unable to bypass Gatekeeper and run on Mac machines at this point. The vendor of the Transmission applet promptly took measures as well, cleaning the malware from its website and posting a notification about the need for an immediate upgrade to the safe version 2.92. And yet, the fact that the incident took place at all keeps a question mark hanging over the efficiency of ransomware response mechanisms. Evolution of Mac Ransomware In fact, there are other breeds of Mac ransomware at large, but those are browser lockers rather than crypto viruses, and the damage isn't nearly as high. The infamous FBI MoneyPak malware affects Safari on infected Macs by displaying a persistent page that impersonates the FBI. The warning message contains false accusations of illegal user activity such as copyright violation and distribution of prohibited adult content. It also claims that all files have been encrypted, but that's a total bluff: all it takes to resolve the issue is resetting Safari.
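The asymmetry described above is the crux of the extortion: the public key suffices to lock files, while only the private key held on the criminals' server can unlock them. A textbook-sized RSA illustration (toy parameters for clarity only; KeRanger uses 2048-bit keys):

```python
# Toy RSA with classic textbook parameters -- illustration only, NOT secure.
p, q = 61, 53               # two small primes (a real key uses ~1024-bit primes)
n = p * q                   # 3233, the modulus; part of the public key
e = 17                      # public exponent
d = 2753                    # private exponent: e*d = 1 mod lcm(p-1, q-1)

def rsa_encrypt(m, e, n):
    """Anyone holding the public key (e, n) can lock data."""
    return pow(m, e, n)

def rsa_decrypt(c, d, n):
    """Only the holder of the private exponent d can unlock it."""
    return pow(c, d, n)

m = 65                          # a stand-in for a chunk of victim data
c = rsa_encrypt(m, e, n)        # ransomware needs only (e, n) to encrypt
assert rsa_decrypt(c, d, n) == m  # recovery requires d, held on the C2 server
```

This is why paying the ransom, backups aside, is often the only recovery route once a well-implemented asymmetric scheme has run: the decryption secret simply never exists on the victim's machine.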
As opposed to ridiculously primitive browser lockers, the Mabouia proof-of-concept and KeRanger are the first samples of Mac ransomware code that actually encrypt victims' files. As it turned out, Apple's security barriers aren't an insurmountable obstacle for cybercriminals. This obvious progress in attack vectors and techniques gives us a glimpse of what the future holds: ransomware may start targeting Mac OS X in earnest and will quite likely become a number-one security concern for Mac aficionados in the near future. About David Balaban David Balaban is a computer security researcher with over 10 years of experience in malware analysis and antivirus software evaluation. David runs the Privacy-PC project, which presents expert opinions on contemporary information security matters, including social engineering, penetration testing, threat intelligence, online privacy and white hat hacking. As part of his work at Privacy-PC, Mr. Balaban has interviewed such security celebrities as Dave Kennedy, Jay Jacobs and Robert David Steele to get firsthand perspectives on hot InfoSec issues. David has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.
April 5, 2016 The threat landscape in 2016 is almost completely unrecognisable from that of ten years ago. Today's landscape is populated by actors who are well resourced, highly determined and increasingly sophisticated, motivated by anything from ideology (hacktivists and cyber terrorists) and geopolitical gain (state-sponsored hackers) to, most commonly, money. While the worms and viruses of old still pop up, most cyber criminals have all but abandoned these vectors in favour of more targeted, covert and successful attacks. Targeted attacks and Advanced Persistent Threats (APTs) first surfaced publicly in around 2010, when the so-called Operation Aurora attacks on Google and others foreshadowed the firm's exit from China. Stuxnet quickly followed, and suddenly the floodgates were open. Typically beginning with a "spear phishing" email or social media message using social engineering techniques, the attack then triggers a malware download onto the system. The malware quietly loads in the background without the user's knowledge, escalating privileges inside the network until it finds the data it's looking for. Attackers spend time researching their targets on the internet to hone their phishing lures, and are increasingly zeroing in on IT administrators, whose privileged accounts give them unlimited access. They also research possible vulnerabilities on the system so that the malware can bypass existing defences. The cybercriminal underground that sits beneath all of this on the "Dark Web" of anonymisation networks like Tor and I2P and private forums is an immense, enigmatic beast. Estimates have put its size at anywhere between 4 and 500 times the size of the "surface" web. There, cybercriminals buy and sell stolen credit cards, identities, exploit kits and other attack tools, which have democratised the ability to launch sophisticated targeted campaigns.
The fact that enterprises are now hugely more exposed to such threats – through a flood of new vulnerabilities appearing every month and an explosion of new cloud services and applications – makes the bad guys' jobs even easier. That organisations have to secure these increasingly complex environments with minimal budget is just the icing on the cake. Yet the stakes are higher than ever. The average cost of a data breach rose again in 2015, up 23% in just two years. The repercussions are immense: loss of brand and shareholder value, damage to customer loyalty, legal costs, financial penalties, and remediation and clean-up costs, to name but a few. Target reported that losses related to its massive breach totalled $148m – a staggering amount, but one that just begins to scratch the surface. A losing battle? Given the size, scale and sheer organisation of the cybercrime underground – not to mention the threat from state-sponsored attackers and hacktivists – it's not surprising that the security industry is continuously on the back foot. Its adversaries are more agile, and have the element of surprise and the cloak of anonymity on their side. Slowly the security industry has adapted, building new solutions that move away from the old static, signature-based AV model: first heuristic detection, which spots malware based on characteristics in its code, and then behavioural techniques. There has also been a shift to cloud-based threat prevention systems which stop or block threats before they hit the network. The new generation of tools pioneered by the likes of FireEye and Trend Micro is designed to stop those all-important zero-day threats often used in targeted attacks – that is, those which exploit as-yet-unseen flaws. Sandboxing executes an unknown threat in a virtual environment in near-real time to see whether it's dangerous or not.
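Heuristic detection of the kind mentioned above can be illustrated with one classic trick: packed or encrypted payloads tend to show unusually high byte entropy compared with plain code or text. A minimal sketch of the idea (my own illustration, not any vendor's engine; the 7.2-bit threshold is an arbitrary assumption):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: near 8.0 for random/packed data, low for plain text."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    """Crude heuristic flag: high entropy suggests compression or encryption."""
    return shannon_entropy(data) > threshold

plain = b"this program cannot be run in DOS mode " * 50
packed = bytes(range(256)) * 64   # perfectly uniform stand-in for a packed section
```

Real engines combine dozens of such weak signals (suspicious API imports, section names, string obfuscation) precisely because any single heuristic, like this one, is trivially noisy on its own.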
Security vendors have also been developing tools which leverage big data analysis of customer data and threats in the wild to identify and correlate new malware. Such is the sheer volume of threats that these companies need vast data centers and computing power just to stay on a par with the cybercriminals. Security is broken Yet, after all that investment, security software vendors still admit that the best security stance for a CSO today is to accept that they have already been breached. If a hacker is determined enough, they will get into your organisation. The best the industry can do is provide systems which try to spot when this has happened as soon as possible, in an effort to minimise the risk of data loss. It is easy to see why organisations are reducing their security budgets when security software is so clearly broken. Did you know: your PC or mobile device can be compromised just by visiting a malicious webpage. Targeted attacks go undetected for months or even years. Phishing emails are clicked on, irrespective of volume. Opening a malicious PDF or Word attachment could lead to a covert, multi-year data breach. Threats targeting Apple products increased 56% in 2015 – Apple products are not immune. There are hundreds of thousands of new malware strains discovered every day, and the pace of malware creation is increasing all the time: the volume of malware found last year accounts for one third of all malware ever written. From this it is easy to see why security is broken. Organisations need to find a new way of stopping these attacks, and if software-based solutions aren't working, then it is time for stronger, more resilient hardware-based solutions. About Cesare Garlati Chief Security Strategist, prpl Foundation Cesare is an internationally renowned leader in mobile and cloud security. He is the former Vice President of Mobile Security at Trend Micro and Co-chair of the Mobile Working Group at the Cloud Security Alliance.
The prpl Foundation is an open-source, community-driven, collaborative, non-profit foundation supporting the next generation of connected devices, providing guidance on a hardware-led approach to IoT security.
April 4, 2016 In a world of technological dependence, I, like most other professionals, suffer from increasing degrees of paranoia: a fear that my person, presence and logical footprint may be subject to some form of compromise, interception or manipulation from any one of many exposures – a paranoid state which has driven my acquisition and use of multiple security defences with which I reduce my surface of attack against state-sponsored invaders of all colours, be they Chinese, driven by Titan Rain-type events, or American, under the banner of PRISM, or any other manifestation of the criminal ventures which could impact my personal and financial wellbeing. So, having established that I am suffering from what I feel is an informed state of healthy paranoia, I have taken a number of steps to secure my operational use of technology by employing a number of easy-to-use solutions which underpin a desired level of safe technological lifestyle encompassing: Mobility > e-Mail > Telephony > Messaging. To accommodate a level of serenity, I have come to use, or recommend, the following applications and tools, and I start the conversation with a focus on securing mobile telephony – enabling a modicum of security in the life of the common man (and woman) when they make that call. Mobile Telephony: On occasions where there is a need to ensure that the mobile calls I make from my cell phone enjoy enhanced security over the basic service, I employ the Blackphone solution from the Silent Circle stable. This security enhancement comes in two offerings: the first is a hardware-based device, the Blackphone cell phone, fully enabled with Silent Circle's own modified circuitry, chipset and built-in security functionality; the second is a localised software installation on your own cell phone, which in my case is an iPhone 6s.
Whilst in both cases the user can make insecure, non-encrypted calls to Granny, the key feature is that, where the conversation is sensitive, the Blackphone user may go secure and invoke the required level of VPN encapsulation to protect the conversation. This provides either a Black-to-Black, fully fledged end-to-end secure communications channel, or a Black-to-non-Black channel, which is secured only as far as the Silent Circle server, with the onward channel out of that environment delivered unsecured to the non-compliant, non-Blackphone device – but then, half security is better than none. This service works well, is low cost at around $10 per month, is stable and represents for me a very good ROI. e-Mail Security: When it comes to securing a cross-platform e-mail system – with a focus on users who deserve the choice of a mail platform that gives them a level of defence without the need to get too technical – I often recommend ProtonMail. ProtonMail is a service delivered out of Switzerland, and serves up functionality to accommodate various levels of security and, of course, encryption. As with Blackphone, ProtonMail-to-ProtonMail provides a fully secured channel between service-enabled users. With a ProtonMail-to-non-ProtonMail exchange, however, again as with the Blackphone, the second leg of the logical journey is insecure. But here the user may impose a higher level of security by selecting additional encrypted controls which require the recipient to enter a password to decrypt the secured content. The solution goes further, also allowing the sender to set time-to-live rules against the communication, and to label the type of communication (e.g. Business, or Private, etc.).
Fig 1 below shows some of the key features of the mail application in action. Fig 1. Secure Messaging: We all utilise text messaging from time to time, and in this space my solution of choice comes in the guise of Wickr, which supports iOS, Windows, Mac, Linux (32 and 64 bit) and, of course, Android. Again, here we have a very capable tool which enhances the security profile of this common activity with encryption, as well as other supporting security features such as time-to-live and secure shredding. It is easy to use, and is also available for the corporate space in an Enterprise edition – great features, and highly recommended (see Fig 2 below). Fig 2 – Wickr. Mobility and the VPN: Be it personal or business related, we all encounter the dangers of connecting to public access points in hotels, airports and, of course, on public transport. On such occasions, as soon as we go promiscuous over Wi-Fi, our communications are potentially open to man-in-the-middle attacks which can sniff out our passwords and other private/personal details. It is in this space that my personal option of choice is the very robust IPVanish, used to secure my channels before I touch any potentially hostile open link (and trust me, I know, having been compromised myself at a moment of urgent requirement). IPVanish is an easy-to-use security tool which mitigates what can be a significant and dangerous exposure when embarking on travels. See Fig 3. Fig 3. The above are just a few tools which can be used by even the most non-tech-savvy person who wishes to implement a tad of security to protect their logical life. It may not be the ultimate desire of everyone to be paranoid, but in my case it does help with relaxation at night.
About Professor John Walker Visiting Professor at the School of Science and Technology at Nottingham Trent University (NTU), Visiting Professor/Lecturer at the University of Slavonia [to 2015], Independent Consultant, Practicing Expert Witness, ENISA CEI Listed Expert, Editorial Member of the Cyber Security Research Institute (CRSI), Fellow of the British Computer Society (BCS), Fellow of the Royal Society of the Arts (RSA), Board Advisor to the Digital Trust, Writer for SC Magazine UK, Originator of DarkWeb Threat Intelligence, CSIRT, Attack Remediation and Cyber Training Service/Platform, Accreditation Assessor and Academic Practitioner, and Accredited Advisor to the Chartered Society of Forensic Sciences in the area of Digital/Cyber Forensics. Twitter:   John Walker is also our Expert Panel member. To find out more about our panel members visit the page.
April 4, 2016 Everybody knows that the WordPress CMS is a common choice of blog platform, but what we see is that, most of the time, it is deployed with no security countermeasures (according to the OWASP Top Ten 2013 – The Ten Most Critical Web Application Security Risks – the Security Misconfiguration category sits in fifth position), even when the website has already been compromised before. To avoid some of these threats and increase the security level, below are some best practices for hardening WordPress. Use strong passwords: with letters, numbers and special characters, and longer than 12 characters. It is important to avoid common information about yourself, such as your birthday, and also words found in a conventional dictionary, even in another language. Avoid out-of-date and/or unknown (unrecommended) plugins and themes, or those obtained through piracy (commonly used to spread web malware). Also check whether any of your plugins or themes appears on a known-vulnerabilities list, and if one appears there with no patch/update after the date shown, deactivate it as soon as possible. It is also possible to configure automated updates in the WordPress configuration file. Keep daily or weekly (or whatever period you choose) backup routines, run automatically, that store the files on another (remote) server; try to use SFTP or SSH for the transfer. Put your website behind a WAF (Web Application Firewall), which will analyse all HTTP requests (mostly GET and POST) and block the bad ones (those matching a malicious network signature). A well-known open-source WAF is Apache ModSecurity. Put script verification/detection mechanisms on all comment text boxes, newsletter subscriptions and contact forms to avoid SPAM incidents through the website.
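The password advice above is easy to capture in a small validation routine. A minimal sketch (the rules follow the text; the tiny wordlist is illustrative, not an official WordPress check – a real deployment would load a full dictionary file):

```python
import string

# Illustrative sample only; substitute a real dictionary/breached-password list.
COMMON_WORDS = {"password", "letmein", "wordpress", "qwerty"}

def is_strong_password(pw: str) -> bool:
    """Longer than 12 chars, mixes letters/digits/specials, no dictionary words."""
    if len(pw) <= 12:
        return False
    has_letter = any(c.isalpha() for c in pw)
    has_digit = any(c.isdigit() for c in pw)
    has_special = any(c in string.punctuation for c in pw)
    contains_common = any(w in pw.lower() for w in COMMON_WORDS)
    return has_letter and has_digit and has_special and not contains_common
```

For example, `is_strong_password("Tr1ck-St4ble-H0rse!")` passes all four rules, while `"password12345"` fails on length, character mix and the dictionary check.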
Add a blank index.php inside directories, because websites are commonly hosted on shared servers where it isn't possible to personalise the web server configuration, and the directory-listing option is often enabled. Normally this file is created in the directories "wp-includes", "wp-content", "wp-content/plugins", "wp-content/themes" and "wp-content/uploads". Use a digital certificate on all pages of your website (HTTPS; prefer TLS over SSL 3.0, which is vulnerable to CVE-2014-3566), both publicly accessible and restricted. Avoid hosting more than one website within a single account (commonly in Plesk or cPanel systems), because if just one is compromised the invasion will spread to the others, and the security incident will have a huge impact across your whole business. About Icaro Torres Icaro Torres is a networking technologist and postgraduate in information security who works at HostDime Brazil on technical support and audit/security of the systems hosted in the company's datacenters. He contributes to OWASP through translation projects and in his city's chapter. He continuously studies web application security, pentesting and malware analysis.
April 1, 2016 The rapid development of drone technology and growing awareness of the potential threat drones pose have led to a burgeoning drone detection market. Technology providers offer reliable detection mechanisms, but organizations now face a new challenge: how do you respond to an alert? Each drone countermeasure has its own pros and cons, and choosing the right one is no easy matter. Just as there are multiple drone detection mechanisms, there are also multiple drone countermeasures. When creating a drone response plan, organizations have to take into account the legalities of the airspace around them, as well as the feasibility and trade-offs of each countermeasure. No single response is ideal for every threat, or even for every organization within a single industry. Counter-drone measures can be divided into three categories: 1. Regulation and Manufacturing Standards (Registration, License Plates, Pilot Licenses, etc.) This approach involves using a drone's registration and license plates to report the pilot. A drone detection system should feature a camera that records every intruding drone. The recording is saved with all the other data, including date and time, and can be recalled at any time, such as for investigative purposes. This countermeasure offers a number of advantages. It allows the organization to identify the owner and, because the incident is addressed through the authorities, there is more transparency and less liability for the reporting organization. It also means fewer pilot failures, because the drone isn't directly attacked. Of course, this approach is only feasible if the drone is registered and has license plates – unlikely in high-risk scenarios involving terrorists or criminals. No-Fly Zones/Geo-Fencing A geo-fence is a virtual barrier that prevents drones from flying in defined areas.
A software program defines the boundaries of a no-fly zone via the global positioning system (GPS) or radio-frequency identification (RFID). The primary advantage of geo-fencing is that it can reduce the risk of unintended threats by preventing drones from entering the no-fly zone. However, not all drones use this technology, it can be circumvented, and several other factors make it hard to rely on today. 2. Passive measures Passive measures involve reducing the threat posed by the presence of a drone without actually disrupting the drone. If the drone is detected in time, you can: send security personnel to intercept the drone, lead people to safety, block the drone's view, lock cell doors and gates in the case of a correctional facility, and search the site for dropped objects. This approach offers several advantages. Depending on the application, it can be highly effective. It doesn't require approval from authorities and can be combined with the countermeasures previously mentioned. It reduces the risk of someone getting hurt as a result of a crash. However, therein lies this countermeasure's number-one disadvantage: the drone is not stopped. A dangerous payload may still be delivered and, in the meantime, productivity takes a hit as you attempt to mitigate the risk to your people and other assets. 3. Active measures Active measures physically stop the detected drone – their number-one advantage. In most cases, stopped drones present a crash risk, which can cause physical harm and even fatalities, especially in heavily populated areas. Another drawback is that in most countries these measures can only be used by law enforcement in the case of an imminent threat. Active countermeasures include: Jammer, Spoofer Jamming or spoofing a drone's radio connection or GPS is currently the most practicable and effective active countermeasure; it will cause the drone to either return to its start position, veer away, land or crash.
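The geo-fencing boundary check described above can be sketched in a few lines. This is my own illustration of a circular zone using a great-circle distance; real systems use polygonal zones, certified GPS data and firmware-level enforcement:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def breaches_no_fly_zone(drone_fix, zone_centre, radius_m):
    """True if the drone's (lat, lon) fix lies inside the circular no-fly zone."""
    return haversine_m(*drone_fix, *zone_centre) <= radius_m
```

With a 500 m zone centred on a facility, a fix roughly 100 m away trips the fence while one a kilometre out does not; the article's caveat stands, though, since enforcement depends entirely on the drone's firmware honouring the result.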
Unfortunately, there's no way to tell which until you do it. This countermeasure can also affect other radio and GPS connections in the vicinity, is difficult to execute against drones in autopilot mode, and is subject to approval by your local authorities. However, jamming or spoofing does offer an additional advantage beyond taking down the drone: it leaves open the possibility of eventually tracking down the pilot. Firearms, electromagnetic pulse (EMP), laser You can also choose to take down intruding drones using firearms, an EMP or a laser. In this case, the drone is destroyed and crashes. Firearms are only effective at short range, so they have minimal use cases, and are subject to approval by your local authorities – as are EMPs and lasers, which are military technologies and therefore not economically viable. Counter-Drone Taking down an intruding drone with a counter-drone reduces the risk of a crash. However, it requires having a competent pilot at the ready, 24/7, to respond to intruders. In addition, the counter-drone must be extremely powerful. Both factors make this countermeasure cost-prohibitive for most organizations. Net cannon The final active countermeasure offers the benefit of stopping intruding drones with minimal crash risk. It involves shooting a net over the drone from the ground with a net cannon. Unfortunately, this approach also carries the greatest disadvantages, in that it is only effective at short range and has a low success rate. As you can see, choosing the most effective drone countermeasure is no easy task. However, just as the most effective drone detection systems combine detection methods to ensure accuracy under varying conditions and to reduce false positives, they should also offer you flexibility in deploying a variety of drone countermeasures. Organizations should also look for a provider who will serve as a consultative partner in identifying the appropriate countermeasures for their use cases.
About Jörg Lamprecht CEO, Co-Founder In 1996, while still studying maths and computer science at the University of Kassel, Jörg Lamprecht set up his first company, Only Solutions GmbH, with Rene Seeber and another fellow student. The software company really lived up to its name: one of the products it developed was the first search engine for pictures on the internet, which was used – among other things – to trace missing children. Only Solutions was later renamed Cobion and now belongs to IBM. In 2006, Jörg founded Qitera. In 2011, he discovered the emerging market for drones and responded by founding Aibotix, a company that produces unmanned aircraft for professional use by surveyors and engineers. Aibotix was sold to the Hexagon group from Sweden in February 2014. At Dedrone, Jörg uses his expertise as founder and manager to lead business development, sales and marketing, with a special focus on setting up international partner and distribution networks.
March 30, 2016 Notes from the Battlefield: Cybercriminals vs. Business Travelers and How to Keep Your Data Safe It used to be that a business trip was just a business trip, complete with pay-per-view TV in bed, tiny bottles of shampoo and room service for anyone feeling extravagant. Yet in today's era of global business travel, mobile devices and ever-more-sensitive digital data, a seemingly innocuous stay in a hotel can result in disastrous security breaches for business travelers and the companies they represent. What are the security concerns currently affecting executive travelers, and how did they creep undetected into the hospitality industry to muck up a relatively good thing? More importantly, what can executives and security professionals do to fight back? Tinker, Tailor, Soldier, Spy The DarkHotel threat came to attention following a spate of cyber attacks that targeted executive-level guests at luxury hotels in Asia. First recorded in 2007, the attacks came to light more fully a few years later when researchers got reports about a cluster of customer infections. Here's how it works: attackers infiltrate hotel WiFi networks and fool users into downloading malicious software that looks like a bona fide software update. Once the user downloads the virus, an advanced key-logging tool is installed that enables the hackers to capture passwords. They relentlessly spearphish specific targets in order to compromise systems, and use a P2P campaign to infect as many victims as possible. To evade detection, the hackers delete their tools from the hotel network after the operation is finished. The original DarkHotel attacks were striking due to their sophistication and the suggestion of a state-sponsored campaign. High-profile executives from businesses, government agencies and NGOs were among the targets, with victims located in Japan, Taiwan, China, Russia and South Korea.
Researchers believe that the initial DarkHotel campaign was likely the work of a nation-state, with signs that it may have originated in South Korea. Not Just for the 1% Anymore: DarkHotel for the Rest of Us The cloak-and-dagger nature of the original DarkHotel campaign and its possible tie-in to government spying make it all too easy for more run-of-the-mill companies and executives to continue along their merry way, harboring the illusion that DarkHotel won't affect them. Sadly, that's simply not the case. Cyber attacks on luxury hotels have expanded greatly since they were first discovered, potentially numbering in the thousands, across hundreds of hotels worldwide. Starwood Hotels became a recent casualty of cybercrime late last year when its payment systems were compromised by malware, enabling unauthorized parties to access customers' payment card data. Corporate executives with valuable personal assets make enticing targets for hotel hackers. However, cyber criminals often set their sights on a bigger target: the victim's corporate assets. It's easy to see how enterprise data is at risk, given that hackers can gain access to everything on a victim's mobile devices. They can also install malware targeting files, photos, built-in cameras and microphones, enabling a level of cybercrime unthinkable in the past. And don't forget that a hotel's reservation database and keycard system can provide useful access to customer information. Not surprisingly, a new wave of cyber criminals has turned hotel hacking into a veritable free-for-all, often lying in wait to cherry-pick their targets. There are businesses hacking competitors, governments hacking businesses, and governments hacking each other. And let's not forget regular old cyber thieves who are simply out to make a buck. As malevolent as it may seem, DarkHotel is part of a digital ecosystem and the outgrowth of new ways of computing. What trends in today's technology landscape have allowed these attacks to take root?
The Evolving Digital Landscape: DarkHotel 2.0 Two key technology trends have emerged that account for much of the DarkHotel phenomenon and explain why business travelers and their enterprise endpoints are exposed to significant security risks. First, executive travelers are connecting to data and services using their own mobile devices. This widespread practice has increased hacking possibilities exponentially, with enterprise data especially at risk as executives work and check in with their corporate offices from the road. Not only do users often have several devices – smartphones, tablets, laptops and wearables – but those devices are weakly protected and regularly in use. They also handle large volumes of increasingly sensitive data. This is alarming, since hackers can extract unencrypted or weakly encrypted data from devices, and even modify a device to obstruct security measures. The second trend in mobile computing presents a much bigger problem: executives using cellular or public WiFi networks rather than corporate networks. By taking data outside of corporate firewall/IPS/NAS and DoS network protection, users are incurring risks not only to their own devices, but to others connected to the same business network. Whether hotels own and operate their network infrastructure or use a managed services firm, most carry little to no security and don't encrypt their public networks. Sometimes hotels also have routers susceptible to easy hacking. Hackers take advantage of the fact that every wireless device is designed to trust the network to which it connects. The threat is real, with an estimated 10,000,000 laptops exposed to commjacking. Accordingly, "man-in-the-middle" attacks, where hackers lure users into connecting to fake public WiFi or cellular networks, have become the preferred strategy for so-called "commjackers" targeting hotels and other public spaces such as coffee shops and airports.
Whereas hijacking a public WiFi or cellular network was once time-consuming, complex, exorbitant, big and bulky, the tools of the hacking trade have gotten simpler, cheaper, smaller and more powerful. Using inexpensive open source tools and widely available network equipment, even novice hackers can now easily commit man-in-the-middle attacks. Videos available on YouTube, attracting millions of views, describe the steps needed to accomplish this, with the tools needed to commjack networks now being sold online for nominal cost. With the means so simple and the rewards so rich, it's no surprise that DarkHotel attacks have taken off. Where does this leave business executives who have sensitive data to protect? Fighting Back: Security Strategies to Help Executive Travelers Stay in the Game There are various common-sense strategies that executive travelers can adopt to safeguard their mobile devices. All devices should be equipped with anti-malware and anti-virus solutions and include password protection, encryption, data backup and remote data wipe capabilities. Other smart protective measures include using a VPN or IPsec and paying attention to SSL certificates when conducting sensitive online activity. Multi-factor authentication with one-time-use tokens is a good safeguard, and users should always delete saved public networks. It's also important that travelers double-check update alerts that pop up on their computers during hotel stays. Enterprise IT departments can also play a role in ensuring digital security. Executives should outline their travel plans to their IT personnel, who may have access to intelligence on cyber threats. Security professionals can also check devices upon the executive's return for signs of hacking, and implement training to help executives minimize security risks while traveling. Still, the above strategies can only do so much absent safe WiFi and cellular connectivity.
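The advice above about paying attention to SSL certificates can be made concrete with certificate pinning: record a server's certificate fingerprint while on a trusted network, then compare against it from the road. The sketch below uses only Python's standard library; the function names are illustrative, and a real deployment would also handle key rotation and fail closed on any error.

```python
import hashlib
import socket
import ssl

def fingerprint(cert_der):
    """SHA-256 fingerprint (hex) of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def fetch_cert(host, port=443):
    """Fetch the server's certificate over a chain- and hostname-verified
    TLS connection."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def check_pin(host, expected_fp, port=443):
    """Compare the live certificate against a fingerprint recorded earlier,
    e.g. while on the trusted corporate network. A mismatch on hotel WiFi
    suggests the connection is being intercepted."""
    return fingerprint(fetch_cert(host, port)) == expected_fp.lower()
```

The comparison is the whole point: a commjacked connection that swaps in its own certificate produces a different fingerprint even if the browser padlock still appears.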
Fortunately, enterprises can also take steps to secure the network used by executives who are on the road. To accomplish this, companies need comprehensive network protection equivalent to corporate firewall/IPS/NAS protection. Enterprise IT departments can purchase and deploy such solutions that operate in conjunction with existing anti-malware solutions. Telcos and MSSPs are increasingly doing the same to provide network-level security on top of their core business services, software installation and maintenance. Using monitoring solutions available on the market, users can install a software agent on their mobile devices to detect malicious networks in real time and prevent devices from connecting to compromised hotspots. Such security packages can deliver real-time threat maps and enable companies to plan their response protocols. Enterprise security solutions are also available to protect against remote-based commjacking, where hackers remotely take control over routers and cellular base stations to access voice and data traffic. Help for Those Who Help Themselves Many of the original DarkHotel techniques remain in use today, with the addition of some new strategies. Like bedbugs, hackers are evolving alongside the strategies designed to thwart them. Fortunately, while the risks posed to business travelers by DarkHotel are alarming, it is possible to secure data and prevent potentially astronomical losses to corporate data, assets and IP. What can't be mitigated are the risks posed by inaction, where business travelers and their companies simply hope for the best and cross their fingers that hackers won't hit them. In today's mobile-first world, executive travelers who haven't been hit already probably will be soon. About Dror Liwer Dror is the co-founder and Chief Security Officer of Coronet. He has extensive management, business development and technological experience building and leading technology-centric, client-facing organizations.
As a senior technology executive, he has a proven track record of building organizations, motivating teams, and working with senior non-technology executives, applying his unique blend of strategic direction-setting and tactical execution capabilities.
March 30, 2016 The kerfuffle over the naming of vulnerabilities like Badlock and ShellShock misses the mark on why this is a good thing for the industry. Given the sheer volume and scale of the application security problem companies face today, anything that draws attention to the seriousness of the state we're in is a good thing. I'd argue that the moniker 'Heartbleed' created so much buzz that it forced companies to evaluate their own exposure, because Boards and senior management had heard of it and were asking. Would the same be true if it were simply known as CVE-2014-0160? Of course, we don't want to take this so far that the power of the naming gets oversaturated, like your favorite song on heavy radio rotation. It is almost impossible to comprehend why application security isn't getting more attention. In 2014 alone, there were eight major breaches through the application layer, resulting in more than 450 million personal or financial records stolen. And we aren't talking about small breaches at companies no one has heard of. Target, JPMorgan Chase, Community Health and TalkTalk are four examples of companies that have suffered breaches due to vulnerabilities in software. With such high-profile breaches, you would think more people would be paying attention. And if naming serious vulnerabilities in a memorable way helps achieve this, then that's a benefit for the whole industry. Chris Wysopal, CTO, Veracode. Veracode is a leader in securing web, mobile and third-party applications for the world's largest global enterprises.
By enabling organizations to rapidly identify and remediate application-layer threats before cyberattackers can exploit them, Veracode helps enterprises speed their innovations to market – without compromising security. Veracode's powerful cloud-based platform, deep security expertise and systematic, policy-based approach provide enterprises with a simpler and more scalable way to reduce application-layer risk across their global software infrastructures. Veracode serves hundreds of customers across a wide range of industries, including nearly one-third of the Fortune 100, three of the top four U.S. commercial banks and more than 20 of Forbes' 100 Most Valuable Brands.
March 30, 2016 Major web browsers are to consider blocking the cryptographic hash function Secure Hash Algorithm (SHA)-1 from as early as June this year as it becomes increasingly vulnerable to forgery attacks. In light of this, Oscar Arean, technical operations manager of disaster recovery provider Databarracks, advises businesses to act now in order to protect customer data. The SHA algorithm was developed by the US National Institute of Standards and Technology (NIST) for use in creating digital signatures. In effect, it acts as a 'fingerprint', making it easy to tell if a document has been modified. Until recently, many believed the complex algorithm would be immune from hackers due to the significant costs of attacking SHA-1. However, with the advent of increasingly affordable cloud computing, this cost has dropped drastically, as Arean explains: “Around three years ago, researchers estimated that a practical attack against SHA-1 would cost around $700,000 using commercial cloud computing services. But recently researchers estimated that it could be done far more cheaply by renting the Amazon EC2 cloud platform – well within the reach of a cyber criminal's budget. Because of the increased danger of malicious tampering with SHA-1-signed documents, Google, Microsoft and Mozilla have decided to stop trusting SHA-1 through their respective web browsers, with actions potentially being taken to block access as early as this summer (June 2016). “This will obviously have a big impact on those businesses still using SHA-1. Some websites' password verification, proof-of-work and message integrity processes are still based on the SHA-1 algorithm, meaning that sensitive customer information is at significant risk from dangerous cyber-attacks. Moreover, with the major web browsers snubbing SHA-1 certificates, potential visitors would be blocked or refused access if trying to visit a site secured with SHA-1.
The results are thus either a breakdown of trust from a website's users, or a complete lack of traffic due to incompatibility with modern browsers. Clearly, both are severely damaging to any website's business, and so website managers need to act quickly to ensure their encryption methods are up to date, secure and trusted by both consumers and web browsers.” Thankfully, Arean explains, upgrading from SHA-1 to SHA-256 can alleviate these security and compatibility concerns. The process can be straightforward as well, and rests upon working with a strong certificate provider and educating a user base about these changes: “Organisations looking to upgrade their website's encryption services need only to contact their certificate provider and purchase the SHA-256 certification. That's really it – the provider can make the necessary encryption changes and sign off, as an independent third party, that your site's hashing algorithm is legitimate. “When educating your users about this change, the situation can become more complicated. Old browsers or operating systems, such as Windows XP SP2, do not support SHA-2. As such, websites need to be clear that after the upgrade, users will need to use newer browsers, such as Firefox, which are still compatible with their hardware while supporting the secure SHA-256.” Arean concluded: “Websites that are yet to upgrade to the SHA-256 model need to act quickly – a failure to move away from SHA-1 could mean the end for sites using the now insecure hashing algorithm. It's imperative businesses action this now by making the necessary upgrades.” About Databarracks Databarracks provides ultra-secure, award-winning Disaster Recovery, Backup and Infrastructure services from UK-based, ex-military data centres. Databarracks is certified by the Cloud Industry Forum, ISO 27001 certified for Information Security and has been named a “Niche Player” in Gartner's 2015 Magic Quadrant for DRaaS.
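The 'fingerprint' behaviour Arean describes, and the difference between SHA-1 and SHA-256, can be illustrated with Python's hashlib. This is a sketch of the hashing concept itself, not Databarracks' certificate upgrade procedure; the function names are illustrative.

```python
import hashlib

def digest(data, algorithm="sha256"):
    """Return the hex fingerprint of `data` under the given hash algorithm."""
    return hashlib.new(algorithm, data).hexdigest()

def is_unmodified(data, recorded_fingerprint, algorithm="sha256"):
    """True if `data` still matches the fingerprint recorded when it was
    signed; any modification to the document changes the digest entirely."""
    return digest(data, algorithm) == recorded_fingerprint

original = b"Transfer 100 GBP to account 12345"
fp = digest(original)                      # recorded at signing time
print(is_unmodified(original, fp))         # True
print(is_unmodified(original + b"!", fp))  # False
```

A SHA-1 digest is 160 bits against SHA-256's 256 bits, but the practical difference is that collisions (two documents with the same fingerprint) are now within reach for SHA-1, which is what undermines its use in signatures.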
March 29, 2016 USB Thief, a new data-stealing threat, is capable of stealthy attacks against air-gapped systems and is also well protected against detection and reverse-engineering. ESET researchers have discovered a new data-stealing Trojan, detected by ESET as Win32/PSW.Stealer.NAI and dubbed USB Thief. This malware exclusively uses USB devices for propagation, without leaving any evidence on the compromised computer. Its creators have also employed special mechanisms to protect the malware from being reproduced or copied, which makes it even harder to detect and analyze. “It seems that this malware was created for targeted attacks on systems isolated from the internet,” comments Tomáš Gardoň, ESET Malware Analyst. The fact that USB Thief is run from a USB removable device means that it leaves no traces, and thus victims don't even notice that their data were stolen. Another feature – and one that makes USB Thief unusual – is that it is bound to a single USB device, which prevents it from leaking from the target systems. On top of all that, USB Thief has a sophisticated implementation of multi-staged encryption that is also bound to features of the USB device hosting it. That makes USB Thief very difficult to detect and analyze. USB Thief can be stored as a plugin source of portable applications or as just a library – a DLL – used by the portable application. Therefore, whenever such an application is executed, the malware will also run in the background. “This is not a very common way to trick users, but very dangerous. People should understand the risks associated with USB storage devices obtained from sources that may not be trustworthy,” warns Tomáš Gardoň. Additional details about the USB Thief Trojan are available from Tomáš Gardoň and on ESET's official IT security blog, WeLiveSecurity.com. About ESET Since 1987, ESET has been developing security software that now helps over 100 million users to Enjoy Safer Technology.
Its broad security product portfolio covers all popular platforms and provides businesses and consumers around the world with the perfect balance of performance and proactive protection. The company has a global sales network covering 180 countries, and regional offices in Bratislava, San Diego, Singapore and Buenos Aires.
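ESET has not published USB Thief's exact key-derivation scheme, but the general idea of binding encryption to one physical device can be sketched: derive the key from device-unique attributes, so a payload copied onto different hardware derives a different key and cannot decrypt itself. Everything below (the attribute names, the salt, the toy XOR cipher) is an illustrative assumption, not the malware's actual implementation.

```python
import hashlib

def device_bound_key(device_id, volume_serial, salt=b"example-salt"):
    """Derive a key from attributes of a specific USB device.

    Binding the key to the hardware means ciphertext created on one stick
    is useless when the files are copied to any other device.
    """
    material = device_id.encode() + b"|" + volume_serial.encode()
    return hashlib.pbkdf2_hmac("sha256", material, salt, 100_000)

def xor_cipher(data, key):
    """Toy symmetric cipher for demonstration only; a real design would
    use an authenticated cipher such as AES-GCM."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

The defensive takeaway is the same as ESET's warning: analysis tools that copy a sample off its original device may end up with data they cannot decrypt, which is precisely what makes such samples hard to study.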
March 29, 2016 Over the last decade we've seen a significant increase in mobile technology, and it is now becoming the heart of the customer experience, forcing retailers to figure out how the digital and physical relationships can work together. Retailers must now decide whether to equip their personnel with mobile devices, introduce more self-service kiosks or expand mobile technology even further – all in the aid of delivering a personalised approach and improving the in-store experience for shoppers. So how has mobility become so important, and where will it need to go to meet the expectations of consumers? Rise in mobility It is predicted that, by the end of 2016, more consumers will be browsing on mobile devices than on traditional computers for the very first time. This trend has grown greatly since smartphones first appeared ten years ago and has encouraged consumers to expect the same level of engagement from their retailers. Some retailers have taken this on board, resulting in a rise of instore mobility, but most haven't, leaving customers wanting more: a recent study[1] found 93 per cent of consumers would like to see more stores using instore mobile technology, highlighting its lack of uptake so far. Impact on customer experience So far the rise of mobility has had a significant impact on customer experience. 73 per cent of consumers feel retailers which offer instore mobile technology provide superior customer service, and a further 64 per cent are more likely to shop at a retailer which provides instore mobile technology. This highlights how increasing mobility in store is having a positive impact on customer experience, which will soon result in increased satisfaction for shoppers, eventually driving sales. What the future holds As highlighted, one element of the future which is guaranteed is that shoppers expect to see more retailers using instore mobile technology.
However, retailers must understand the type of technology to implement and consider what requirements shoppers of the future will have. 65 per cent of consumers are keen to see instore mobile technology that can order online if a product is not available. This is an interesting reverse of what most consider the normal omnichannel approach of ordering online and collecting instore. 63 per cent of consumers have also stated they prefer mobile point of sale (PoS) to a traditional cashier checkout, with a further 72 per cent preferring mobile PoS as it offers faster checkout times or no queues. When considering these shopper expectations, it is clear to see that mobility has made a strong impact on customer experience and will be at its heart going forward. Retailers must now take these facts on board and plan a future mobility strategy to meet the expectations of the next generation of customer. About Nassar Hussain Nassar Hussain is Managing Director for Europe and South Africa at SOTI, the world's most trusted provider of Enterprise Mobility Management (EMM) solutions, with more than 15,000 enterprise customers and millions of devices managed worldwide. SOTI's innovative portfolio of solutions and services provides the tools organizations need to truly mobilize their operations and optimize their mobility investments.
March 27, 2016 Security procedures are vital in many areas of everyday life. Across the globe, busy airports ensure crew and passengers alike go through thorough and strict security checks. This may be time-consuming and inconvenient, but it is absolutely necessary to ensure passenger safety, and the consequences of skipping such processes have the potential to be extremely dangerous. Similarly, when you log on to your online banking account, you may have to enter one or more security codes and PINs to be granted access, which can be frustrating when you're in a hurry but is monumentally important to prevent your data getting into the hands of someone else. It's evident that security procedures may seem inconvenient in consumers' day-to-day lives, but how does this carry over into their professional world? The sheer level of valuable and perhaps sensitive information a business holds means that the security measures organisations put in place are likely to be strict and sometimes time-intensive. In line with this, as employees increasingly access both company and personal data on the same devices, these processes need to be implemented in order to ensure employees at every level are doing all they can to keep company data secure. However, employees don't particularly want to spend time going through such strict processes. So what businesses need to consider is whether they are making security processes too complicated for employees to adhere to day-to-day. Freedom vs. Security Employees want the same freedom as consumers. They want to work from mobile devices, from anywhere, at any time. In the same breath, however, they still need to do this at a level of security suitable for the business. Consumers may have one password for all online accounts, just because it's easier to remember. Or they may simply shun online services requiring two-factor authentication, such as online banking, because it takes too much time.
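Two-factor authentication of the kind used by online banking is commonly a time-based one-time password (TOTP). A compact standard-library sketch of the published algorithm (HOTP/TOTP, RFC 4226/6238) shows why the codes are both short-lived and hard to forge; a real deployment distributes the shared secret via a QR code in Base32, which this sketch omits.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant).

    secret: shared key bytes known to the server and the user's token app.
    The code changes every `step` seconds, so an intercepted code is
    useless moments later.
    """
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp) // step            # moving time window
    msg = struct.pack(">Q", counter)
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This implementation reproduces the RFC 6238 test vectors, e.g. the shared ASCII secret "12345678901234567890" at time 59 yields the 8-digit code 94287082.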
The trouble is, if employees have this lax attitude to security on their work devices, they may be opening your business up to all sorts of risks. BYOD and the ever-growing mobile office must become a top priority. The right employees must have access to the right resources at the right time, whether they're on the move or in the office. This means that ensuring the correct access management strategy is in place to cope with a mobile office is imperative. The rise of the data breach The consequences of employees being the weakest security link are becoming increasingly severe. There have been many developments concerning the issue of data security over recent years. In fact, until recently, information management was something only larger businesses thought about. However, over the past twelve months in particular, the issue has been thrust to the front of all CIOs' minds as attitudes towards data protection have changed. The most recent update of the General Data Protection Regulation (GDPR), leading to the biggest overhaul of regulation in the last twenty years, coupled with several high-profile data breaches including those of Ashley Madison, Hilton Hotels and WHSmith, reinforces the fact that, with the ICO watching, businesses must be more prepared than ever to secure and protect sensitive information – and it doesn't have to be too complicated either. When staring down the barrel of a data breach, it isn't necessarily the breach itself that could upend a business. Now, with these new measures in place, it's the possibility of being fined up to four per cent of global turnover by the ICO, as well as the almost guaranteed negative press coverage hitting a company's reputation and damaging its relationship with its customers. These risks aren't something that enterprises should be taking lightly.
Streamlined, simple and secure Employees are still the weakest link when it comes to information management, so rather than implementing complex security measures that discourage workers, security needs to be as user-friendly as possible. For example, advising employees to use stronger passwords and change them more frequently does not solve the problem, and may not be practical when employees have five or more passwords. Organisations need to adopt a solution that removes the majority of user effort – not doing so encourages employees to work around processes and put your organisation at risk. Companies with data in the cloud should implement an IAM solution as soon as possible in order to get access under control and ensure employees aren't discouraged by complex security measures. Forrester Research estimates this type of solution will reduce your organisation's threat surface by 75 per cent[1]. A solution such as this allows employees to easily access apps and programmes whilst keeping business data secure; it removes the human error element and is quicker and more convenient for employees to adhere to. Another simple way to address the issue of security within an organisation is to teach staff about the security issues that face the business. By being more aware of the potential threats, staff are more likely to take security procedures seriously and perhaps notice if something doesn't seem secure.
March 23, 2016 Cybersecurity remains a key concern and a real threat to many businesses. As a recent study of 150 board members in the UK found, the estimated average cost of lost data over one year could amount to as much as £1.2 million. Yet there still remains a lack of boardroom governance across the UK's major industries. It prompts the question as to whether there are other aspects of security and critical infrastructure being overlooked by UK boardrooms, which could also result in significant financial loss if ignored. Protecting buildings and assets, communications and data systems, marine and transport equipment and power and water sites, to name but a few, against damage is crucial to a business' operations. A power cut of any length, for example, can have a substantial impact: stopping productivity, allowing unauthorised individuals to enter systems and sites, and putting a business' reputation at risk both with its customers and within its industry. Having a contingency plan in place if power is lost can reduce, if not in some cases eradicate, the risk to business security and keeps everything operational. Take, for example, a building, plant or site: if the power is deliberately tampered with and security fences, alarms and CCTV cameras are deactivated, a business is at risk of people entering its property, staff not being notified and the crime not being recorded. Businesses need to review how many power cuts they have experienced in the past year alone, and the cost to the business each time this happens. They could be looking at thousands, if not hundreds of thousands, of pounds for large companies, and when this is added up, the total cost over the year can be astronomical. In addition, individual components can be damaged as a result of a power spike, surge or dip, leading to costly repairs.
With the need for more focus on resilience and business continuity comes the need to invest in technology that can provide the robust reassurance businesses now seek. In the event of a power cut, an uninterruptible power supply (UPS) system will switch seamlessly to backup batteries without interrupting the power, ensuring there is no disruption to normal service. On restoration of a company's power, the system automatically switches back to mains power and begins to re-charge the batteries. In addition, extra security can be added to UPSs in the form of keys and passwords, giving businesses further peace of mind that their critical infrastructure is protected. The fact is that a business' reliance on power cannot be overstated, and 100% uptime is now demanded; there's no room for failure and the security risks are too high. All areas of a business' security need to be higher up the boardroom agenda as the environment is constantly changing and the risks and costs to the business increase. With growing emphasis on cybersecurity, the wider security landscape can't be forgotten. What it comes down to is that, if left too late, the consequences can be almost too much to think about. Scott Billson, Senior Sales and Marketing Manager
March 23, 2016 In an episode of the TV show “Sherlock,” a pair of bad guys die in a crash after a hacker takes complete control of their car. In an episode of “Homeland,” the vice president is assassinated with his own pacemaker when a cyberattacker takes control remotely and stops his heart. On “CSI: Cyber,” a hacker infiltrates a navigation app, directing victims to areas where they get robbed. These scenarios are no longer just the stuff of Hollywood writers' overactive imaginations. As our lives become increasingly digitized and connected through the Internet of Things (IoT), those kinds of hacks are becoming more and more plausible. Especially with Gartner estimating that the number of connected devices in the consumer and business sectors will keep climbing steeply — and many of those devices not necessarily being designed with security in mind. But even more troubling is the reality of attacks on critical public infrastructure — the possibility of a hacker disabling a city's entire 911 system or plunging an entire region into darkness by taking out the power grid. As former U.S. Secretary of Defense Leon Panetta has been frequently quoted, “The most destructive scenarios involve cyber actors launching several attacks on our critical infrastructure at one time, in combination with a physical attack on our country.” Combined with the disabling of critical military systems and communication networks, these kinds of actions could be devastating. Security experts have warned that several state actors have the capability of compromising U.S. critical infrastructure — including the Islamic State in Iraq and Syria (ISIS). Public infrastructure an increased target The Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), part of the U.S. Department of Homeland Security, responded to 295 incidents related to critical infrastructure in fiscal year 2015, 50 more incidents than the previous year. Many incidents go unreported, ICS-CERT said.
Even if the number seems small compared to data breaches in the private sector, the potential consequences are far more devastating. According to Trend Micro's 2015 survey of 575 respondents — heads of security and CIOs of major critical infrastructure operators from 26 member states of the Organization of American States — 43 percent indicated they had experienced an attack, while 31 percent weren't sure. And about half of the respondents noted an increase in computer systems incidents from the previous year, with another 40 percent noting steady levels. In another 2015 survey of respondents from around the globe, the Aspen Institute and Intel Security found that nearly half of the respondents thought it was either likely or extremely likely that “a successful cyberattack will take down critical infrastructure and cause loss of human life within the next three years.” Respondents in the United States were more concerned than those in Europe. Just the last few months saw several critical-infrastructure attacks around the world. In December, about 225,000 customers of several Ukrainian power companies lost power for hours, and Russian hackers were blamed. More recently, a power utility was targeted via a phishing attack. Although the grid itself wasn't affected, this was yet another example of a particularly vile type of attack. And as we saw in February when a hospital's systems were taken hostage by ransomware, this kind of threat may not only cost organizations a lot of money but could also completely cripple critical operations — in this case, access to patient data and the ability to perform tasks that impact patient health, such as lab work and scans. The NSA's director has warned that several governments have already breached energy, water and fuel-distribution systems in the United States. One known incident that surfaced last year was an intrusion by Iranian cybercriminals in 2013. ‘Detection and response’ as the new normal Various security experts expect to see more attacks on critical infrastructure.
Both Symantec and McAfee listed this among their predictions, with McAfee noting a new trend of cybercriminals selling direct access to critical infrastructure systems. According to McAfee's survey, 76 percent of respondents think those threats are escalating. Nation-state actors are likely to be the culprits. CrowdStrike also predicts that in 2016, specific nation-state actors will likely target agriculture, healthcare and alternative energy sectors “not just for intellectual property, but also for know-how such as building native supply chains and administrative expertise.” The ramifications of security incidents on critical infrastructure don't just include disruption of critical operations and critical business applications. An ESG survey found that 32 percent of organizations also experienced the loss of confidential information. The fallout for an organization may lead to increased regulatory scrutiny and government penalties. Many of the attacks happen because of the lack of analytical security systems. In a SANS Institute survey of critical infrastructure organizations, less than a third felt they had excellent or very good visibility into their networks' threats, while 40 percent rated their visibility as OK, poor or very poor. Traditional, signature-based security solutions no longer hold up to today's sophisticated threats, especially as more data moves to the cloud. That means organizations need to get serious about advanced analytical systems that can correlate various processes and policies — and help provide the kind of detection and response that antimalware and other single-layer technologies simply can't deliver. The increased targeting of critical infrastructure should be a wake-up call. It's only a matter of time before a disastrous attack wreaks havoc.
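The "advanced analytical systems" argument boils down to baselining normal behaviour and alerting on deviations. Below is a deliberately minimal sketch of that idea over a single metric (a rolling mean and standard deviation with a z-score threshold); real platforms correlate many signals across processes and policies, and the class name and thresholds here are assumptions for illustration.

```python
import math
from collections import deque

class AnomalyDetector:
    """Flag observations far from a rolling baseline (mean + k * stddev)."""

    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Record `value`; return True if it is anomalous vs. recent history."""
        anomalous = False
        if len(self.history) >= 10:  # require a minimal baseline first
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = math.sqrt(var) or 1.0  # avoid division by zero
            anomalous = abs(value - mean) / std > self.threshold
        self.history.append(value)
        return anomalous
```

Fed a counter such as failed logins per minute or outbound bytes per host, even this toy detector notices the step-change that a signature-based tool, which only matches known-bad patterns, would miss.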
Organizations need to up the ante on their cybersecurity and shift their focus to detecting all security breaches and bringing situational awareness to incidents — especially those that may pose incredible harm. About Sekhar Sarukkai Sekhar Sarukkai is a co-founder and chief scientist, driving the future of innovation and technology. He has more than 20 years of experience in enterprise networking, security and cloud services development.
March 23, 2016 Denial-of-service attacks are so common now that “DoS attack” hardly needs explanation, even to the lay person. The phrase “DoS attack” instantly conjures images of banking sites that refuse to load and gaming consoles unable to connect. The other instant reaction is to think of the attackers behind such campaigns. However, not all denial of service is the product of a coordinated attack. Many forms of DoS are organic by-products of completely normal traffic. So-called “normal traffic” includes legitimate customers, business partners, search-index bots, data-mining scraper-bots, and other more malicious automated traffic. As we know, anywhere from 40–70 percent of any given web site's traffic is automated. Combined with often unpredictable surges in legitimate user traffic, maintaining the availability of any Internet-based service is daunting. This brings up a topic of frequent debate: who should be responsible for managing availability — the security team or the infrastructure and application development teams? The security triad of “confidentiality, integrity, and availability” (CIA) dictates that security practitioners work to ensure availability. The scope of this duty extends beyond availability issues caused by malicious attacks. Attackers regularly perform reconnaissance to identify vulnerabilities in availability. These vulnerabilities range from the capacity of ISP links and firewall performance to DNS server availability and application performance. Sizing ISP links and firewall throughput are well-understood and easily quantified aspects of availability planning. The latter areas of DNS capacity and application performance are oft-overlooked areas of application security. Application security practices are maturing to address remediating OWASP Top 10 vulnerabilities such as injections, scripting, or poor authentication and authorization handling.
However, many application security scans do not include identifying processor-intensive and bandwidth-intensive URLs, as these aspects of application performance monitoring (APM) might be seen as the sole responsibility of the application development and/or server administration teams. After all, it's their job to ensure the code is optimized and the server capacity is available, or is it? Unfortunately, while server infrastructures are more elastic thanks to virtualization, and applications are often built to take advantage of that compute power, without proper monitoring and regular scanning, weaknesses in application capacity can quickly lead to serious outages. A single underperforming URL or other web application widget can affect the load of an entire server or farm of servers. Further, application dependencies can cause more serious race conditions, leading to widespread impact. Proactively scanning web applications to identify underperforming URLs not exposed in software QA or user acceptance testing enables the security team to add additional protections to heavy or processor-intensive URLs. These protections range from additional log and alert thresholds to more aggressive bot detection and dynamic traffic throttling. Without such preventative measures, a marketing campaign, Cyber Monday, or an eventful news day can cause denial-of-service conditions unrelated to any malicious attack pattern. Many, if not most, traditional security measures are derived from understanding the normal state of traffic and then identifying anomalous patterns. This methodology is implemented in everything from IP address blacklisting and whitelisting to attack signature checking, SYN flood detection, and source/destination ACLs. However, these methods fall short when the cause of DoS is rooted in well-formatted requests for legitimate services.
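A proactive scan of this kind can start very simply: time each endpoint and flag the outliers. The sketch below is a minimal, hypothetical illustration rather than any particular scanner or APM product; the endpoint list, the one-second threshold, and the `measure_url` helper are all assumptions:

```python
import time
import urllib.request

def measure_url(url, timeout=5.0):
    """Time a single GET request, returning wall-clock seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # include transfer time, not just time-to-first-byte
    return time.perf_counter() - start

def flag_heavy_urls(timings, threshold=1.0):
    """Given {url: seconds}, return URLs over the threshold, slowest first."""
    heavy = {url: t for url, t in timings.items() if t > threshold}
    return sorted(heavy, key=heavy.get, reverse=True)

# Simulated results; a real scan would call measure_url() per endpoint.
timings = {"/search?q=*": 3.2, "/report/export.csv": 2.1, "/index.html": 0.05}
print(flag_heavy_urls(timings))  # → ['/search?q=*', '/report/export.csv']
```

Endpoints flagged this way become candidates for the extra protections described here: tighter log and alert thresholds, more aggressive bot detection, or throttling.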
Since the majority of traffic on Internet-facing web sites is automated, filtering out malicious or illegitimate automated traffic offers protection for the resource-intensive features of the web application. Profiling web applications for resource-intensive components, much as an attacker would, also provides additional insight. Gaining insight into fragile application components enables more effective monitoring, including of server response times, which can be used as metrics for a more dynamic response to potential L7 DoS conditions. Security and availability are intrinsically linked. Leveraging components of the infrastructure such as application delivery controllers (ADCs), application performance monitoring (APM) solutions, and other availability tools is vital to a comprehensive security practice, even if those solutions don't have "security", "threat", or "firewall" in the product name. About Brian A. McHenry As a Senior Security Solutions Architect at F5, Brian McHenry focuses on web application and network security. McHenry acts as a liaison between customers, the F5 sales team, and the F5 product teams, providing a hands-on, real-world perspective. Prior to joining F5 in 2008, McHenry, a self-described "IT generalist", held leadership positions within a variety of technology organizations, ranging from startups to major financial services firms. Twitter:
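The idea of using server response times as a signal for dynamic throttling can be made concrete with a token-bucket admission limiter that tightens when latency degrades. The class below is a simplified sketch; the base rate, latency threshold, and halving/recovery factors are illustrative assumptions, not the algorithm of any real ADC product:

```python
class AdaptiveThrottle:
    """Token-bucket request limiter that tightens as server latency degrades."""

    def __init__(self, base_rate=100.0, slow_threshold=0.5):
        self.base_rate = base_rate            # requests/sec under healthy latency
        self.slow_threshold = slow_threshold  # seconds; above this, throttle harder
        self.rate = base_rate
        self.tokens = base_rate

    def observe_latency(self, seconds):
        """Feed in a measured response time; adjust the admission rate."""
        if seconds > self.slow_threshold:
            self.rate = max(1.0, self.rate / 2)               # back off quickly
        else:
            self.rate = min(self.base_rate, self.rate * 1.1)  # recover slowly

    def tick(self, elapsed=1.0):
        """Refill tokens for `elapsed` seconds at the current rate."""
        self.tokens = min(self.rate, self.tokens + self.rate * elapsed)

    def allow(self):
        """Admit one request if a token is available."""
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Two consecutive slow readings would cut the admission rate to a quarter of its base, while healthy readings let it climb back, giving the asymmetric back-off/recovery behavior that L7 DoS mitigation generally wants.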
March 22, 2016 Israeli software researchers have found a way to exploit the Stagefright vulnerability, previously found to leave millions of devices susceptible to cyber attacks. Stagefright was originally described as 'the worst Android bug ever discovered'; the exploit – dubbed 'Metaphor' by its creators – marks the first time the vulnerability has been successfully exploited in the wild. According to Jan Vidar Krey, head of development at Norwegian security specialists Promon, Android's inconsistent patching and system updates leave far too much to chance, inviting cyber attackers to try their hand at executing malware on other people's devices: "Although Google released security patches for the Stagefright vulnerabilities, not every Android phone and tablet can receive and install them, leaving a large number of devices vulnerable. Metaphor, however, is an appropriate name for the flaw, which can be viewed as representative of Android's history of shoddy security: heterogeneous but woefully predictable." Stagefright 2.0, a second critical exploit discovered by the researchers, was found to exploit weaknesses in .mp3 and .mp4 files and remotely execute malicious code. Krey commented: "It's not a surprise that the Stagefright vulnerability is back in the news. When it was revealed, patches and updates were only available for recent models. The first hack could impact up to 95 per cent of devices, so manufacturers' failure to address the flaw in the six months that have passed since its discovery is a huge oversight. Sadly, consumers will ultimately pay the price." Krey advised: "Android's operating system is currently the security equivalent of shark-infested water, and the only way to guarantee secure processes is to ensure your app is completely protected. When you're hosting sensitive information on applications, these threats pose a real concern. Instead, apps must be self-defending and able to identify malware as and when it appears.
Until Android is able to straighten out its OS and stop leaning on dodgy patches, base-layer app security must be upheld as the crux of a device's security. Whether Android alone will ever be able to offer a safe environment to carry out transactions remains to be seen, but I wouldn't bank on it." About Promon Traditional security systems such as antivirus, antispam and antimalware are outdated and no longer able to protect companies and users against security threats and cyber-crime. Promon provides full protection for applications against existing and new malware threats. Promon's patented method for detecting and blocking security threats against applications enables self-protecting apps, allowing users risk-free use of a potentially unprotected computer, tablet or mobile telephone. Promon is a globally operating company with its head office in Oslo. Comments are closed
March 22, 2016 Check Point has revealed the most common malware families being used to attack organizations' networks and mobile devices globally in February 2016. For the first time, malware targeting mobiles was one of the top 10 most prevalent attack types, with the previously unknown HummingBad agent being the seventh most common malware detected targeting corporate networks and devices. Discovered by Check Point researchers, HummingBad targets Android devices, establishing a persistent rootkit, installing fraudulent apps and enabling malicious activity such as installing a key-logger, stealing credentials and bypassing encrypted email containers used by enterprises, with the aim of intercepting corporate data. Check Point identified more than 1,400 different malware families during February. For the second month running, the Conficker, Sality, and Dorkbot families were the three most commonly used malware variants, collectively accounting for 39% of all attacks globally in February.

1. ↔ Conficker – Accounted for 25% of all recognized attacks. Machines infected by Conficker are controlled by a botnet; it also disables security services, leaving computers even more vulnerable to other infections.
2. ↑ Sality – Virus that allows remote operations and downloads of additional malware to infected systems by its operator. Its main goal is to persist in a system and provide a means for remote control and for installing further malware.
3. ↑ Dorkbot – IRC-based worm designed to allow remote code execution by its operator, as well as to download additional malware to the infected system, with the primary motivation of stealing sensitive information and launching denial-of-service attacks.

Check Point's research also revealed the most prevalent mobile malware during February 2016, and once again attacks against Android devices were significantly more common than against iOS. The top three mobile malware families were:

1. ↑ HummingBad – Android malware that establishes a persistent rootkit on the device, installs fraudulent applications, and enables additional malicious activity such as installing a key-logger, stealing credentials and bypassing encrypted email containers used by enterprises.
2. ↓ AndroRAT – Malware that is able to pack itself inside a legitimate mobile application and install without the user's knowledge, allowing a hacker full remote control of an Android device.
3. ↓ Xinyin – Observed as a Trojan-Clicker that performs click fraud on Chinese ad sites.

Nathan Shuchami, Head of Threat Prevention at Check Point, said: "The rapid rise in attacks using HummingBad highlights the real and present danger posed to business networks by unsecured mobile devices and the malware that targets them. Organisations must start to protect their mobile devices with the same robust security as traditional PCs and networks as a matter of urgency. With the range of attack vectors open to hackers, adopting a holistic approach to security that includes mobile devices is critical to protecting both corporate networks and sensitive business data." Check Point's threat index is based on threat intelligence drawn from its Threat Map, which tracks how and where cyberattacks are taking place worldwide in real time. The Threat Map is powered by ThreatCloud, the largest collaborative network to fight cybercrime, which delivers threat data and attack trends from a global network of threat sensors. The ThreatCloud database holds over 250 million addresses analyzed for bot discovery, over 11 million malware signatures and over 5.5 million infected websites, and identifies millions of malware types daily. About Check Point Worldwide Leader in Securing the Future Since 1993, Check Point has been dedicated to providing customers with uncompromised protection against all types of threats, reducing security complexity and lowering total cost of ownership.
We are committed to staying focused on customer needs and developing solutions that redefine the security landscape today and in the future.
March 21, 2016 More Than Half of Survey Respondents Believe Digital Currency is the Future; Consumers Throw Caution to the Wind on Security for their Work and Personal Email Accounts IEEE, the world's largest professional organization dedicated to advancing technology for humanity, today announced the findings of an online survey that detail more than 1,900 technology enthusiasts' views on digital safety and the future of cybersecurity. According to the results, when asked in what year mobile payments would be secure enough that traditional methods (such as cash and credit cards) would no longer be required, 70 percent of respondents indicated a major shift by 2030. The survey also found, on a scale from 1 to 5 (1 being least concerned and 5 most concerned), similarly low levels of concern regarding the security of work email (50 percent) and personal email (49 percent) accounts, which is surprising given that there is no dedicated IT department to monitor and protect personal email as there is for a work-affiliated account. "Now more than ever, cybersecurity is a necessary safeguard to our digital lives, which host a variety of our private and personal information," stated Diogo Monica, IEEE member and security lead at Docker. "Cyberattacks can now unfortunately happen in nearly every element of our lives, such as our car, connected home and wearable devices. Whether it's putting more reliance in digital systems for our currency or trusting that our email accounts are secure, we need to be cognizant and take the necessary precautions to protect our digital footprint." Consumers No Longer on Cloud Nine More than one quarter (26 percent) of participants also noted that the cloud was their least preferred method for storing information; 49 percent of respondents chose their personal computer as their primary option. Respondents did have concerns regarding other aspects of their digital footprint.
When asked on a scale from 1 to 5 (1 being riskiest and 5 least risky) about their personal information being available on certain platforms, respondents believed that online banking (72 percent), syncing to the cloud (53 percent) and banking/mortgage information (60 percent) were extremely risky, indicating a 1 or 2 for each. "There is a stigma attached to the terms 'cybersecurity' and 'hacker,' due in large part to personal and corporate attacks, but there is so much opportunity and growth available in the cybersecurity industry," stated David Brumley, IEEE member and director of CyLab at Carnegie Mellon University. "Initiatives such as 'Hacking for Good' can not only provide tools and a career path for students, but can also help change perceptions, since the stereotypical 'hacker' isn't representative of the field as a whole. Responsibly encouraging and developing the next generation of cybersecurity personnel is needed to ensure we are protected in the future." Internet Starts with "I" – Managing Your Digital Home There is a level of sophistication among respondents who monitor their home Internet activity. According to the results, 22 percent of respondents have automated alerts set up for any attempted connectivity, 11 percent utilize visualized monitoring in real time and 3 percent connect to a cloud monitoring system. When asked what would be most affected by the continued development of cybersecurity, participants noted identity theft (42 percent), followed by online anonymity (27 percent), piracy (18 percent) and viruses (12 percent). About the Survey IEEE hosted an online survey on IEEE Transmitter from February 16 to March 29. The survey asked participants who are actively engaged in technology trends a variety of questions regarding their digital comfort level as well as what the future of cybersecurity might hold. The total number of survey respondents was 1,903. Full survey results can be found by visiting IEEE Transmitter.
About IEEE IEEE is the world's largest professional association dedicated to advancing technological innovation and excellence for the benefit of humanity. IEEE and its members inspire a global community through IEEE's highly cited publications, conferences, technology standards, and professional and educational activities. IEEE, pronounced "Eye-triple-E," stands for the Institute of Electrical and Electronics Engineers. The association is chartered under this name, and it is its full legal name.
March 18, 2016 Gone are the days when companies only had to worry about valuable documents leaving the building in a pocket or briefcase. Today, sensitive and proprietary information can move across networks in digital format – and even be plucked out of these networks from the sky. The need for intrusion detection has expanded beyond the front door of your building to the network and now, thanks to advances in drone technology, the airspace above. Chances are good you have a drone or someone you know has a drone – they're amongst the fastest growing technologies available. In just two years, the worldwide market for consumer drones has experienced a 167% increase in sales, according to venture capital firm Kleiner Perkins Caufield & Byers (KPCB). World drone sales are estimated to have hit 4.3 million units, and KPCB estimates the market to be worth about $1.7 billion. Drones are used for a variety of purposes, ranging from farmers checking on livestock to utility companies inspecting power lines and videographers using them to film weddings. As with any technology, as drones become more advanced, they also become more accessible. For example, a drone that can carry up to 11 pounds and fly over a mile can be purchased on the Internet for less than $2,000. As the price continues to go down, we can reasonably expect the risk they pose to increase. The security risks posed by drones Drones link physical and cyber security by making it possible to transport snooping devices within close proximity of data centers and networks. What's more, using GPS and autopilot, many drones can fly a programmed route without a pilot. This means an attacker can be in a completely different location from the crime scene. This isn't just a hypothetical. In 2015 the security industry witnessed several examples of how drones can be used to steal sensitive data: Security firm SensePost introduced its Snoopy drone, which is designed to hack smartphones and steal data.
Aerial Assault's David Jordan introduced a drone designed to penetration-test networks and collect unencrypted data. Student researchers in Singapore developed software that can identify open Wi-Fi printers and then establish a fake access point to intercept documents. The software can be loaded onto a smartphone that is attached to a drone. It won't be long before cybercriminals add drones to their arsenal. So how can organizations protect their data? The sophistication of drone technology requires a new kind of intrusion detection system. The drone detection system Drones vary in size, speed and shape, which makes it difficult to detect them via any single monitoring method. For example, audio detection would fail to recognize silent drones like gliders or fixed-wing drones. Cameras are unable to detect all shape-changed drones, such as those designed to look like birds. Even radar, which is traditionally used in the detection of aerial vehicles, must be modified to effectively detect drones. The best solution uses a drone detection system that incorporates multiple mechanisms, or sensors, to detect and identify drones in real time based on signatures. The cloud-based network of sensors helps ensure accuracy under varying conditions and reduces false alarms. You can think of it as an intrusion detection system for the sky. By analyzing characteristics like flying behavior and silhouette, and applying neural network classification to the radar cross-section, the system can determine whether the entity flying through the airspace is a drone or, for example, a bird. Drone detection systems are still young, but vendors are working hard to advance the technology. For example, organizations can expect drone detection systems to integrate with their network-based intrusion detection and prevention systems and physical security dashboards. Radar and other long-range technologies will also enhance drone detection technology.
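At its core, a multi-sensor detection system of this kind fuses per-sensor confidence scores into a single verdict. The snippet below sketches the simplest possible fusion, a weighted average checked against an alert threshold; the sensor names, weights, and threshold are illustrative assumptions, and production systems use far richer classifiers (e.g. neural networks over radar cross-sections):

```python
def fuse_sensor_scores(scores, weights, threshold=0.6):
    """Combine per-sensor confidences (0..1) into a drone/no-drone verdict.

    scores:  {sensor_name: confidence} from the sensors that reported
    weights: {sensor_name: relative trust in that sensor}
    """
    total_weight = sum(weights[s] for s in scores)
    fused = sum(scores[s] * weights[s] for s in scores) / total_weight
    return fused, fused >= threshold

# Illustrative weights: radar and RF trusted most, audio and video less.
weights = {"radar": 0.4, "rf": 0.35, "audio": 0.15, "video": 0.10}
scores = {"radar": 0.8, "rf": 0.9, "audio": 0.2, "video": 0.7}
fused, is_drone = fuse_sensor_scores(scores, weights)
print(round(fused, 3), is_drone)  # → 0.735 True
```

Normalizing by the weights of only the sensors that reported lets the system still render a verdict when, say, a silent glider gives the audio sensor nothing to score.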
Interdiction and countermeasures, though still early in the game, are progressing as well. This is a tricky challenge due to a variety of legal uncertainties. Shooting down a drone or interfering with its radio and GPS signals could result in an out-of-control drone that causes property damage or, worse yet, physical harm to those in the vicinity. In the meantime, security staff can take safety measures offline, such as leading people to safety, blocking the view, locking doors and gates, searching the site for dropped objects and searching for the pilot. Alert videos can also serve as evidence and play an important role in helping to identify the culprit. About Jörg Lamprecht CEO and Co-Founder, Dedrone In 1996, while still studying maths and computer sciences at the University of Kassel, Jörg Lamprecht set up his first company, Only Solutions GmbH, with Rene Seeber and another fellow student. The software company really lived up to its name: one of the products it developed was the first search engine for pictures on the internet, which was used, among other things, to trace missing children. Only Solutions was later renamed Cobion and now belongs to IBM. In 2006, Jörg founded Qitera. In 2011, he discovered the emerging market for drones and responded by founding Aibotix, a company that produces unmanned aircraft for professional use by surveyors and engineers. Aibotix was sold to the Hexagon group from Sweden in February 2014. At Dedrone, Jörg uses his expertise as founder and manager to lead business development, sales and marketing. His special focus is on setting up international partner and distribution networks.
March 17, 2016 The world has been talking about "mobile payments" for years, but the phrase means different things to different people. So what exactly are mobile payments? And how much more mobile than cash or cards can payments actually get? Some people believe that mobile payments are those made using mobile phones. Others, myself included, understand the phrase to mean the most mobile, cash-independent payment method possible—although I consider cash to be more mobile than many other forms of payment. But let's leave those alone for the moment. The second most mobile payment type is the credit card: electricity doesn't always work, and as Barclaycard demonstrated so effectively in its ad, you can even use your card on a waterslide. Now let's take a look at the much-lauded smartphone. Of course, people have their smartphones with them most of the time, but, as the majority of business travellers at airports have demonstrated, they will seize any opportunity to extract just one more drop of power from any source—even the most inaccessible ones—to ensure that their smartphone batteries will survive the day. I'm not convinced that relying on your smartphone (and thus, its battery) for your payment options is a good idea. Ultimately, you'd still have to carry a credit card—or at least a battery charger—to ensure that you weren't left high and dry if the worst happened. This begs the question, though: if you're going to be carrying a credit card anyway, why do you need a smartphone to make payments? Many people, on the other hand, would say: "Ah, but with a smartphone I can make contactless payments". While this statement is perfectly true, it applies equally to credit cards. When it comes down to it, smartphones are nothing more than a medium for storing credit card data. Any credit card can also be equipped with a contactless function; and unlike a smartphone, it won't break if it's dropped into water or onto a concrete floor.
There are also NFC credit card stickers which you can, for example, stick onto your mobile phone. Paradoxically, doing this allows you to add mobile payment functions to even the oldest Nokia. In any critical review, it is important to remember that smartphone operating systems incorporate a wide range of functions. By their very nature, therefore, smartphones will always contain security vulnerabilities which can, in extreme cases, even jeopardise the security of your payment data. It's not just payment data that is at stake here (although a fraudster could use it to go shopping), but all the data pertaining to user purchasing patterns. You're probably thinking: "What's wrong with someone seeing what I'm buying?" Now, though, imagine that you were on holiday in the USA and that someone could use your phone to see that you were currently making all your purchases there. Wouldn't that be the best time for them to burgle your flat? I don't understand why developers feel compelled to include payment functions in smartphones. Why don't we just use credit cards? They're the most widespread, most mobile of all mobile options and—most importantly—they're the least breakable. Ultimately, it doesn't matter whether the NFC chip is embedded into a rectangular plastic card, a cufflink, an earring or a sticker; I've simply yet to be convinced of the consumer benefits of using a smartphone. The facts are: –          Mobile payment systems' success is determined by the availability of NFC-capable terminals and the acceptance of credit cards by retailers. –          Local card systems, like, for example, Carte Bleu or Girocard, are becoming less popular among retailers due to the European interchange regulation. Retailers are losing the cost benefits they have hitherto enjoyed, and are being forced to accept more complex payment processes. –          NFC is now effectively the standard. As of 2016, all POS terminals must be NFC-capable.
Why, then, haven't NFC payments taken off in countries outside the UK, for example, Germany? There is, in theory, nothing technical or structural standing in the way of mobile—or even, contactless—payments. Except, of course, the Germans themselves. In a country where frugality is still considered a virtue, customers are unwilling to pay for banking services. Instead, they believe that everything should be free, from the cards themselves to cash machine withdrawals, account maintenance and loans. Nowhere else in the world is "free banking" as ingrained into the culture as it is in Germany. Contactless cards are only slightly more expensive to produce than standard ones. Banks in Germany tend not to offer them, however, because they're afraid customers won't want to pay the difference. For this reason, contactless payments using credit cards are still largely unknown in Germany, whereas they've become the norm in countries like the UK and Sweden. Perhaps this is precisely the advantage of making mobile payments via smartphones: adding credit card data to a smartphone is cheaper than adding an NFC chip to every customer's card. This means, though, that it's not the consumers who are reaping the benefits of smartphone payments: it's the card-issuing banks. In future, NFC-capable credit cards will be the standard for mobile payments—even outside the UK. Alternative "mobile" payment systems, which use options like QR codes rather than NFC, will never achieve this level of popularity. It's not just the technology vendors and the hugely diverse operators of all kinds of "mobile payment types" who are to blame for this state of affairs; instead, it's fear and lack of knowledge on the part of retailers. It no longer matters whether chips are embedded in smartphones, stickers, or body parts. Instead, convenience and cost will determine which option consumers prefer.
About Philipp Nieland Cofounder and Chief Technical Officer Philipp Nieland studied economics and computer science at university and founded PPRO in 2006. He is responsible for the company's business operations and business development. In 2003, he founded his own consultancy firm and ran the company as CEO. From 2000 to 2003, he was Manager Systems & Applications at Telefónica. Philipp founded his first company, a consultancy firm for internet technology, at the age of 19 while still at university. Building on his entrepreneurial skills and wealth of expertise in information and network technologies, Philipp is committed to driving the growth of PPRO.
March 17, 2016 I clearly remember the first time I saw a computer. Someone was playing a video game called Demo Rush 3 at a church. I remember staring at him, not understanding what he was doing. I couldn't help but wonder how the game actually worked. This fleeting, early moment ignited a passion in me that was to inspire one of my life's defining journeys. To relate this story, allow me to go back to the beginning. My father died when I was a young boy, and it was decided early on that my siblings and I would move to a village so that we could live with my auntie and further our education. There were 12 of us living in a two-bedroom house with no running water. But I had other things on my mind. I was determined to stay in school and get an education. Each day, I woke up very early to walk the three kilometers separating my auntie's house from school. I remember sharp stones poking into my bare feet because I didn't have any shoes to wear. Despite this fact, or maybe because of it, I have fond memories of that period of my life. We all loved each other very much. In this supportive environment, I learned the values of hard work, of helping others, and of being resourceful. I wouldn't have traded any of it for the world. Fast forward a few years. One of my first memories with computers is when I learned to type on a keyboard. I was 15 years old, and I knew that I didn't have enough money to access a computer at an internet cafe or training center, let alone to buy an actual keyboard. A solution came to me when I spotted a box with a picture of a keyboard on it. I cut out the computer keyboard picture and carried it around with me so that I could teach myself how to type. As my high school teacher taught the class to type on real keyboards, I practiced moving my fingers to learn all the letters and symbols across my makeshift cardboard keyboard. It was in high school that my curiosity for computers really took off.
In 2005, a high school friend named Micheale Okwii told me that his dad was starting an internet cafe business. When he told me about it, the first thing I wondered was whether I could help out at the cafe, even if that meant just being around the computers and not touching them. Micheale told me that he would ask his dad. I was filled with anticipation as to the possibilities. The following day, he came back and told me that I could help them sweep and mop the café each day before school if I wanted to, but that was all I could do. My immediate response was yes! At this point, I didn't know anything about computers, but I did know that this was an opportunity for me to be around computers, and that was what I wanted most. I was grateful that Micheale's father was willing to let me volunteer as a cleaner at the internet café. Enthusiastically, I woke up every morning at 4:00 am and walked three kilometers to the café. It took me about 30 minutes to clean, and when I was finished, I headed off to school. After three months of working at the café, they allowed me to touch the computers and learn how to properly turn them on and off. Being able to touch the computers was the first step for me, and it was at this point when I knew I was going to be able to start slowly gaining more knowledge about computers. Eventually, as they trusted me more, they gave me the responsibility of turning on the computers each morning after I cleaned. During my school holidays, the internet café was a haven for me. I spent all of my time there cleaning and using the computers to learn as much as I could. I still was not being paid for cleaning, but at this point, I wasn't concerned with money. What mattered to me most was that I was getting hands-on experience. I used this energy to learn and understand as much as I could about computers.
This routine of volunteering and school went on and on until one day I was hired not only to clean the café but also to help customers surf the web and help my friend's father fix the computers. I was paid one US dollar per day, but I didn't mind. I had learned so much! By the fourth year of work at the internet café, I knew I had accumulated as much knowledge as I could, so I decided to search for something else that would challenge me further. While I still worked at the café, a Canadian girl named Melissa Meartens asked me for help with her cell phone. She had come to Uganda with her friend Ann to volunteer with a Canadian charity organization. Melissa, Ann, and I fast became good friends. During their two-month stay, they noticed that there were a lot of street kids in Jinja wandering around by themselves begging for food. I loved helping street kids, as did Melissa and Ann, so we decided to get to know the kids better by playing soccer and talking with them. We began to build strong, trusting connections with the kids. When Melissa and Ann had to leave for Canada, I chose to continue my relationship with these street kids. As I got to know the kids better, I learned some of them wanted desperately to go to school, some were passionate about playing soccer, and some just wanted a safe place to live and food to eat. I stayed in touch with Melissa and kept her updated about the street kids. Together, Melissa and I helped them enroll in school and tried to provide them shelter. Months later, we turned this collaborative effort into a non-profit organization. Melissa helped me connect with the Canadian charity organization that she had been volunteering with during her time in Uganda. I introduced myself and began volunteering with this organization soon thereafter, an experience which ultimately gave me the opportunity to meet more awesome people.
Case in point: in my last year of high school, I met a Canadian mother and son, Brenda and Tanu Huff, who were both in Uganda volunteering through charity. Tanu and I quickly became as close as brothers. I desperately wanted to continue my studies after high school, yet I didn't know how I would pay for it. I remember mentioning to Tanu at one point my desire to go to university and how I didn't know how I could pay for my studies. Little did I know how much he would take my desire to heart. Tanu came up with an amazing idea. During his time in Uganda, he had developed a love for Ugandan music and had asked me to collect all the dance-style music that I could for him. He told me that when he returned to Canada, he was going to present the idea of a fundraising dance to his high school so that I could start university. Sure enough, when Tanu went back to Canada, he was able to make his idea happen. In total, he raised $4,300, enough for me to start university. The next chapter in my journey happened as I was walking along Main Street in Jinja. I spotted a truck with a logo on it that struck me because I thought it could belong to some kind of organization related to computer technology. I did some research and discovered that I was correct. I knew that I wanted to connect with this organization, but I needed to locate that mystery truck. Over the next few days I searched around town and was pleasantly surprised when I found it right there on Main Street. At that point, I decided to wait to see if I could meet the owner of this truck, so I sat nearby and waited. It turns out that the owner of the truck was an American who was living in Uganda and working with computers. After meeting him, I asked if I could volunteer with his organization. He presented me with his business card and told me to get in touch with him if I was serious. I was so excited at this prospect that I sent him an email that same day letting him know that I was very serious.
At the time, he was starting up a computer training centre and welcomed me on as a volunteer. It was at this training centre where I learned about computers in greater depth. I absorbed a lot, including knowledge about computer programming and web development. After a few months of working there, I left Jinja for Kampala to study at Aptech University and pursue a degree in software engineering. Tanu and Brenda Huff's family and friends did whatever they could to continue to fund my post-secondary education. To keep the fundraising going, I began painting pictures for the Huff family so they could raise the remainder of the funds for me to finish my degree. So there it was: my dream was coming true. I went through university and graduated with a Software Engineering degree. Afterwards, I decided to move back to Jinja and once again volunteer with the same computer training centre. At that point, I had acquired enough knowledge about computer programming and web development that I became a more effective teacher. (I especially enjoyed teaching youth in Uganda.) Through my connection with the American director of the charity with which I was volunteering, I was able to make some solid international relationships within the hacker community. These connections allowed me to learn more about hacking, to get involved with Certified Ethical Hacking (CEH), and ultimately to speak at Derbycon in Louisville, Kentucky in the fall of 2015. This conference was attended by over 3,000 people, each of whom had significant hacking expertise. The experience opened many doors for my career and helped me to see things differently as I discovered new opportunities I had not even considered before. As a result of this experience, I am finalizing my CEH and now realize how much I want to be a Computer Hacking Forensic Investigator (CHFI). I have remained very close with Tanu and his family, and I am currently visiting them in British Columbia.
A few years ago, Tanu and I started a Canadian not-for-profit organization to bridge the gaps in our world. I am volunteering my time to code a unique web application for our organization which will launch soon! But that is a story for another time. As I move ahead and look toward the future, I hope to one day continue to explore my passion for computers and especially to help the youth in Uganda experience some of what I have learned, especially in the area of coding and programming. I want to work again with the street kids who were there when this all began. I really believe that I can use what I have learned through my education to improve literacy in Uganda and, moreover, Africa as a whole. When I look back at where I started, I recognize how fortunate I am to have met the people along the way who allowed me to discover my passion and build a better life as a result. Because of this, I know I need to give back all that I can. Just because you don't have access to a piece of software, a computer or a keyboard doesn't mean that you can't learn infosec. Sometimes there will be no one to help you figure out a command, but you have to problem-solve and figure out a way to do it. Study every piece of material you can get your hands on; it will pay off. If you work with passion, through that labour of love you will be more motivated by how much you get out of your efforts and how much you can accomplish. At some point, I couldn't compile software, but that didn't stop me from learning. Passion has always been what drives my motivation. I come from a third-world country where people don't know about infosec, but that never extinguished the spark and the courage within me to learn. I believed in myself and that one day I could be part of meeting a global need. It might seem very hard and you might wonder or be unsure of where to start, but don't give up. Educating yourself is important, but success in this industry needs more than just knowledge.
Networking, A+, Linux operating system knowledge, and a degree or any form of qualification in computer science or software engineering will be of great help, but you have to have the passion.

About Henry Wanjala
Henry Wanjala was born on 14/07/1989 in Jinja, Uganda, where he attended both elementary and high school. Henry went to university in Uganda's capital, Kampala, where he graduated with a degree in Software Engineering. Mr. Wanjala is involved with the robotics literacy movement in Africa. In February of 2016 he mentored high school students in London, Ontario, Canada for the FIRST Robotics Competition held in March of 2016. In September of 2015 he spoke at the Derbycon hacking conference in Kentucky. Henry is currently coding a web application that enables people to help small-scale initiatives globally that are working to improve social, humanitarian and environmental issues.
March 17, 2016 A hacker gang dubbed Anunak pulled off a high-profile attack against Energobank, based in Kazan, the capital of the Republic of Tatarstan, Russia. The breach took place in February 2015, but its details only surfaced recently in a report by Group-IB, a computer forensics firm hired to look into the incident. The fraudsters managed to deploy the Metel Trojan (the name is a transliteration of the Russian word for “blizzard”) in the bank's IT infrastructure. Also known as Corkow, this malware provided the hackers with unauthorized access to trading system terminals. Over the course of only 14 minutes, the offenders succeeded in conducting currency exchange transactions on behalf of the bank, which caused the US dollar/ruble exchange rate to fluctuate from the regular 60/62 (buy/sell) down to 55/62. Consequently, the criminals were able to carry out multimillion-dollar deals in which interested parties could turn a quick profit by purchasing dollars cheaply and selling them at the average market rate. A total of seven currency exchange requests were made within this brief time span, amounting to more than $500 million. The malware was then remotely deleted from the trading system. According to financial experts' estimates, this artificially created temporary margin caused the bank losses in the millions. Meanwhile, the central bank acknowledged the exchange rate volatility but denied any illegal manipulation, stating that the predicament could have resulted from traders' mistakes. Dmitry Volkov, the cyber crimes investigation division leader at Group-IB, says the Corkow Trojan is capable of traversing a compromised intranet thoroughly enough to locate even remote machines that may handle sensitive financial transactions. Furthermore, the malware in question was found to employ sophisticated antivirus evasion techniques. It can, therefore, fly under the radar of the mediocre defenses that most of the targeted organizations employ.
This feature has enabled the Anunak criminal gang to create a botnet of over 250,000 workstations across the globe, including the internal networks of more than 100 financial organizations. According to the report mentioned above, Energobank was breached via a spear phishing attack. Some of the employees were imprudent enough to open an email masquerading as a message from a Russian banking authority. These emails contained malicious code tasked with exploiting security loopholes in Microsoft Office software. As a result, Corkow was instantly executed on the machines and quickly propagated across the bank's network. This isn't the only known incident involving the Metel (Corkow) malware. Its circulation was first spotted in 2011, and it had remained mostly dormant until the Energobank story. Group-IB researchers believe this was a “pilot” campaign to check how far the bad guys could go with their Trojan. Members of the Anunak ring have since unleashed Corkow to conduct another brazen heist. In August 2015, the malware attacked the credit card system used by about 250 Russian banks. This compromise made it possible for the hackers to steal hundreds of millions of rubles during just one night. The perpetrators withdrew money from ATMs and rolled back these transactions so that repeated cash-outs could be done at other banks' ATMs. At this point, no instances of bank fraud using the Corkow Trojan have been detected outside Russia. That being said, security professionals claim it may pose a risk to financial organizations elsewhere around the globe.

About David Balaban
David Balaban is a computer security researcher with over 10 years of experience in malware analysis and antivirus software evaluation. David runs the Privacy-PC project, which presents expert opinions on contemporary information security matters, including social engineering, penetration testing, threat intelligence, online privacy and white hat hacking. As part of his work at Privacy-PC, Mr.
Balaban has interviewed such security celebrities as Dave Kennedy, Jay Jacobs and Robert David Steele to get firsthand perspectives on hot InfoSec issues. David has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.
March 16, 2016 Phishing is an increasingly devious, almost artistic, threat. The ultimate goal is to trick a target into either downloading malware or disclosing personal or corporate information through social engineering, email spoofing and content spoofing efforts. Having snared an individual, there are a number of ways they can be exploited – from personal identity theft to large-scale corporate breaches. Phishing is thought to have originated around 1995, but it was in 2005 that it became more widely recognised as an attack vector. Ten years later, and phishing is still an issue.

Phishing Evolution
‘Phishers’ cast their nets wide, playing a statistical game in the certainty that a percentage of people will fall for the scam. By way of illustration, a 2015 study of 150,000 phishing emails by Verizon partners found that 23 percent of recipients open phishing messages, and 11 percent open attachments. In the last decade, phishing education has raised awareness of the risks posed by messages arriving in mailboxes. As users questioned the legitimacy of emails and conversion rates fell, phishers needed ways to hone their messages to increase the probability of success. Unfortunately, in tandem, the popularity of social networking sites – such as Facebook, Twitter, LinkedIn, etc. – has furnished phishers with a veritable wealth of information that can be used to legitimise their messages. Coined ‘spear phishing,’ this approach makes it increasingly difficult to determine fact from fiction. While it might seem all a little one-sided, there have been some wins for enterprise security. For starters, as phishers are playing a numbers game, firewalls and email gateways have become adept at spotting and blocking high-volume traffic, meaning many campaigns never arrive in individuals’ mailboxes. Another development has been the rise of anti-virus software that monitors and spots the tell-tale signs of messages containing malware, again diverting them away from inboxes.
As with any ‘profession,’ maximising return on investment is key, so unsurprisingly the scammers are also adapting their techniques, obfuscating their code to evade detection and reducing the volume of messages being sent. One tactic is focusing efforts on the ‘Big Phish’ in the pond – fewer targets, but bigger – in some cases MUCH bigger – returns!

Introducing Whaling
The term ‘whaling’ is a play on words, reflecting the idea that an important person may also be referred to as a “big fish” or, in our case, “phish.” While it has all the same characteristics of phishing, rather than casting a wide net the scam targets a specific end user – such as a C-level executive, database administrator or celebrity. Corporate websites, LinkedIn profiles, and even an organisation’s key Twitter accounts all openly promote the identities of high-level individuals, thus divulging the key characteristics whalers need to ply their trade. As with any phishing endeavour, the goal of whaling is to trick the target into disclosing personal or corporate information through social engineering, email spoofing and content spoofing efforts. One example of a whaling attack (also referred to as CEO fraud) that has yielded results is the ‘wire transfer’ scam. The victim, who is normally a high-level executive, receives a spoofed message from a hacker posing as the CFO, or even the CEO of a partner company, requesting that a money transfer be placed for a vendor payment or company acquisition. Of course, instead of this money being applied to the vendor or merger in question, it is instead applied to a remote account the hacker controls. These messages can be innocuous at first, with the hacker (disguised as an executive or internal employee) asking the victims if they are at their desks. To pull this off, the hacker sends the emails using a display address on the company’s domain, but uses a reply-to address on an external domain, often a free email service.
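The display-address/reply-to mismatch described above is mechanically easy to check for. Below is a minimal sketch using Python's standard email library; the header handling is standard, but the flagging rule and the example addresses are illustrative assumptions, not a production-grade filter:

```python
from email.message import EmailMessage
from email.utils import parseaddr

def replyto_mismatch(msg: EmailMessage) -> bool:
    """Flag a message whose Reply-To domain differs from its From domain --
    the spoofing pattern used in the wire-transfer scam described above."""
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    # A missing Reply-To header (or a matching domain) is not suspicious
    # under this rule alone.
    return bool(reply_domain) and reply_domain != from_domain

# Hypothetical example: display address on the corporate domain,
# replies silently routed to a free email service.
msg = EmailMessage()
msg["From"] = "CEO <ceo@examplecorp.com>"
msg["Reply-To"] = "ceo.examplecorp@freemail.example"
print(replyto_mismatch(msg))  # True
```

A real gateway would of course combine such a check with SPF/DKIM results and allow-lists, but even this simple rule surfaces the pattern described above.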
Using this method, the victims can often end up conversing with the hacker via email without realising they are being duped. This method has been used to steal thousands of dollars from companies in fraudulent transfers, often with requests in the $20–50K range. While that is quite a bitter pill to swallow, many attempts are for much higher amounts and can lead to financial ruin for some companies. A network hardware company called Ubiquiti fell victim to one of these schemes in mid-2015, except instead of wiring tens of thousands of dollars, it was defrauded to the sum of $40M. The company was able to recover a few million, but it is likely that the majority of the cash will never be back in its hands. At the beginning of 2016, the Belgian bank Crelan, Crédit Agricole’s Belgian subsidiary, announced that it had fallen victim to a whaling attack and had lost over €70 million ($75.8 million) in the process. The FBI is on record as saying that companies around the world lost around $1.2 billion / €1.07 billion in the previous two years to whaling attacks. Many companies spend much time and money on protecting their network traffic or public-facing servers from hacks, which is extremely important. But these social engineering spear phishing attempts are why it is equally paramount to protect employee communications as well.

Don’t take the bait
While firewalls and anti-virus continue to have a part to play in defending an organisation against attacks, the scammers are becoming increasingly canny in the type of campaign devised and the method in which they execute the scam. To avoid the bait, organisations need to be equally devious. Here are some tips to avoid the phisher’s net, and the whaler’s snare: As an organisation, consider a different configuration for high-level executive email accounts.
For example, if, as an organisation, email addresses are typically lastname@domain.com, instead use lastname.firstname@ or even firstinitial.surname@ – better still, a pseudonym that only trusted personnel will recognise – anything that makes it harder for phishers to spoof. Initiate a process that must be followed when an unusual request is made – picking up the phone and verifying the request may have prevented some of the wire fraud seen in the last few years. Consider having a ‘secret phrase’ that top-level executives use when communicating with each other so that messages can be legitimised easily. Adopt a policy that all messages are encrypted – while this wouldn’t stop a scammer from sending a message and it being received, the fact that an incoming message is not encrypted should ring alarm bells. Mitigating the risk through the use of reliable email and web filtering solutions is essential. While identifying the whaler’s net is tricky, it’s not impossible, and much of the usual user guidance still applies: if it sounds too good to be true, or just barmy, then don’t do it – challenge it!

About Fred Touchette
Fred Touchette joined AppRiver in February 2007 as a Senior Security Analyst. Touchette is primarily responsible for evaluating security controls and identifying potential risks. He provides advice, research support, project management services, and information security expertise to assist in designing security solutions for new and existing applications. During his tenure at AppRiver, Touchette has been instrumental in assessing critical IT threats and implementing safeguard strategies and recommendations. Touchette holds many technical certifications, including CCNA, CompTIA Security+, GPEN (GIAC Network Penetration Tester) and GREM (GIAC Reverse Engineering Malware) through the SANS initiative. He is highly regarded as an expert on email and Internet-based cyberthreats, and has been referenced in several top technology publications including USA Today, Forbes.com, Dark Reading and more.
March 16, 2016 Gartner’s Magic Quadrant for Web Application Firewalls (WAF) estimates that the global WAF market size is as big as $420 million, with 24 percent annual growth, making the Web Application Firewall one of the most popular preventive and/or detective security controls currently being used for web applications. PCI DSS 3.1 suggests WAF deployment as an alternative to vulnerability scanning, while ISACA includes the WAF among the 10 key security controls that companies need to consider as they embrace DevOps to achieve reduced costs and increased agility. Nowadays, a number of large and midsize companies offer various WAF solutions, usually packaged together with DDoS protection, CDN, ADC and other related offerings. Amazon Web Services (AWS) has itself recently launched its own WAF service. Gartner predicts that by 2020, more than 60 percent of public web applications will be protected by a WAF. However, in 2015 Gartner had only one vendor listed in its WAF MQ as a Leader (Imperva), and only two vendors listed as Visionaries (DenyAll and Positive Technologies). All other vendors are either Niche Players or Challengers. Many more WAF vendors were simply not present in the MQ because they did not meet the inclusion criteria. Last year, security researcher Mazin Ahmed published research demonstrating that the XSS protection of almost all popular WAF vendors can be bypassed. The XSSPosed project, prior to announcing its private and open bug bounty programs, published new XSS vulnerabilities on the largest websites (including Amazon) almost every day, and was effectively an insightful resource for observing just how security researchers bypassed almost every WAF mentioned in the Magic Quadrant. The emerging trend of RASP (Runtime Application Self-Protection) can also be bypassed using techniques similar to those used for WAF bypasses.
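The structural weakness behind these bypasses can be seen with a deliberately naive, WAF-style signature filter. This toy sketch is my own illustration – real WAF rule sets are vastly larger – but the failure mode, a payload class the signatures never anticipated, is the same one the research above exploits:

```python
import re

# Toy blacklist in the spirit of signature-based WAF rules (illustrative only;
# not any vendor's actual rule set).
SIGNATURES = [
    re.compile(r"<script\b", re.IGNORECASE),
    re.compile(r"javascript:", re.IGNORECASE),
]

def blocked(payload: str) -> bool:
    """Return True if any signature matches the payload."""
    return any(sig.search(payload) for sig in SIGNATURES)

# The textbook payload is caught:
print(blocked("<script>alert(1)</script>"))    # True
# An event-handler payload carries the same attack but matches no signature:
print(blocked("<img src=x onerror=alert(1)>"))  # False
```

Every signature added closes one payload class and leaves the rest open, which is why bypass research against even mature rule sets keeps succeeding.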
High-Tech Bridge recently published research which demonstrated that a WAF can be used to mitigate even such complicated vulnerabilities as Improper Access Control or Session Fixation. Sadly, many commercial vendors do not provide even half of ModSecurity’s technical ability and flexibility for virtual patching. However, High-Tech Bridge’s research also highlighted that the ModSecurity OWASP CRS can be bypassed in its default configuration, and that the creation of custom rulesets may be very complicated and time-consuming. There are five main reasons why WAF protection often fails these days:

1. Negligent deployment, lack of skills and different risk mitigation priorities
Many companies simply don’t have competent technical personnel to maintain and support the WAF configuration on a daily basis. Not surprisingly, they just put their WAF into detection mode (without blocking anything) and don’t even bother reading the logs.

2. Deployment only for compliance purposes
Midsize and small companies frequently install WAFs just to satisfy a compliance requirement. They don’t really care about practical security, and obviously won’t care about maintaining their WAF.

3. Complicated diversity of constantly evolving web applications
Today almost every company uses in-house or customized web applications, developed in different programming languages, frameworks and platforms. It’s still common to see CGI scripts from the 90s paired with complex AJAX web applications using third-party APIs and web services in the cloud. Moreover, web developers need to update their web applications almost every day to meet business requirements. Obviously, such a dynamic and diverse environment can hardly be protected even by the best WAF and the most competent engineers.

4. Domination of business priorities over cybersecurity
It’s almost unavoidable that your WAF will cause some false positives by blocking legitimate website visitors.
Usually, after the first complaint to management from an unhappy customer who could not pay for the service and left for a competitor, the WAF is moved into detection-only mode (at least until the next QSA audit).

5. Inability to protect against advanced web attacks
By design, a WAF cannot mitigate unknown application logic vulnerabilities, or vulnerabilities that require a thorough understanding of the application’s business logic. A few innovators try to use incremental ruleset hardening paired with IP reputation, machine learning and behavioural whitelisting to defend against such vulnerabilities. However, these approaches need to pass complicated learning cycles that take quite a lot of time, and are not yet reliable enough. A Web Application Firewall remains a pretty complicated security control to deploy and maintain within an organization. However, a WAF remains probably the only preventive security control for web applications, significantly reducing the risk of web vulnerability exploitation. A properly configured WAF can prevent simple vectors of the most common web vulnerabilities (such as XSS and SQL injection), even in very dynamic and complicated environments. Moreover, if for any reason it’s impossible to patch the vulnerable web application’s source code or apply the vendor’s patch, virtual patching via a WAF can be a life-saver. Nevertheless, in no case should a WAF be considered a panacea against web attacks, and it should always be complemented by other security controls, such as vulnerability scanning, developer security training and continuous monitoring, as suggested by ISACA. Yan Borboën, partner at PwC Switzerland, MSc, CISA, CRISC, comments: “As of today, we can say that cyberattacks have become the new normal in our digitally connected world. There is no ‘magic bullet’ for effective cybersecurity; it’s a journey which starts with the identification of your key risks and your crown jewels (i.e.
client data, intellectual property, etc.) and then finding the right mix between technology, process, and people measures.” While insufficient on its own to properly mitigate complicated security flaws in modern web applications, a Web Application Firewall still remains a necessary security control within organizations.

About Ilia Kolochenko
Ilia Kolochenko is the founder of web security company High-Tech Bridge and the Chief Architect of its ImmuniWeb® platform. Ilia previously worked as a penetration tester, IT security expert and manager for various financial institutions in Switzerland and Central Europe. Ilia holds a bachelor’s degree with honors in Mathematics and Computer Science. He also has a military background, having served in the Swiss artillery troops prior to creating High-Tech Bridge.
March 15, 2016 It doesn’t matter what industry you are in: passwords are going to be a major part of daily life no matter where you are. Despite the famous 2004 prediction that the password is dead, it’s still kicking around today – along with an entire list of requirements and password policies designed to make it as secure as possible for any given environment. Interestingly enough, recent studies have shown that some of those policies – namely mandatory password changes – may not be all that we had originally thought them to be. Lorrie Faith Cranor, Chief Technologist at the Federal Trade Commission and computer science professor at Carnegie Mellon University, recently published research noting that mandatory password changes may not be as effective as IT professionals think, and actually serve as little more than a minor hurdle to a typical modern-day attacker.

Usability is King
Cranor cites two detailed research studies, as well as evidence gathered through her own research at Carnegie Mellon, which support the claim that mandatory password changes put a harmful strain on the end-users in an environment and can ultimately make their accounts less secure. We’ve all been privy to the pains of mandatory password resets – on top of the literal dozens of passwords that we have to remember and use each day, we are then expected to come up with something strong and secure all over again. It can be a nightmare, honestly. In those situations, it is not unheard of to fall into the habit of setting a usable password in favor of a more highly secure one – and therein lies the issue: end-users are more inclined to take whichever path is more convenient, at the risk of sacrificing security.
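That convenience-first behavior shows up concretely when rotation is enforced: users tend to derive the new password from the old one. The mutation list below is a hypothetical illustration of such transformations, not any study's actual algorithm:

```python
def common_mutations(old: str) -> list:
    """Enumerate a few transformations users commonly apply when forced to
    rotate a password (illustrative list; real cracking tools try far more)."""
    guesses = []
    stripped = old.rstrip("0123456789")
    if stripped != old:
        # Increment a trailing number: "Winter2023" -> "Winter2024"
        guesses.append(stripped + str(int(old[len(stripped):]) + 1))
    # Append common suffixes
    guesses += [old + suffix for suffix in ("1", "!", "2016")]
    # Toggle the capitalisation of the first character
    guesses.append(old[:1].swapcase() + old[1:])
    return guesses

print(common_mutations("Winter2023"))
# ['Winter2024', 'Winter20231', 'Winter2023!', 'Winter20232016', 'winter2023']
```

Enumerations like this are how an attacker turns one cracked credential into the user's current one, which is exactly why forced rotation alone buys so little security.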
In her case study, Cranor cites research to support this claim, noting that, “…we found that CMU students, faculty and staff who reported annoyance with the CMU password policy ended up choosing weaker passwords than those who did not report annoyance.” In cases where accounts are truly at risk, this practice serves to negate many of the security policies put in place – even if the password has to be changed frequently. It also serves as an interesting point in support of the fact that much end-user behavior is at least partially dependent on levels of frustration (referred to as annoyance). As it happens, people are predictable. When forced to change passwords on a regular basis, not only do end-users tend more towards setting weaker passwords, their password changes are also more likely to follow a predictable transformation. UNC researchers found that once one password was cracked for a specific user, attackers could often derive that user’s subsequent passwords. If we acknowledge that password fatigue and frustration is one of the root causes of this human error in judgment, resolutions can be readily implemented to overcome such potentially disastrous end-user behavior.

So What’s the Verdict?
This research on mandatory password changes has made one thing very clear: end-users seek out convenience and usability whenever they can, often with no regard to the potential fallout. With the increasing number of passwords required for daily access, adhering to a stringent policy for password changes has made end-users react in a way that is more manageable yet less secure – which can put an entire network at risk. In order to provide a secure alternative, solutions like password managers or even Single Sign-On (SSO) should be provided to end-users where available. Single Sign-On makes use of industry standard protocols (SAML, CAS, Shibboleth, Kerberos, etc.) in order to eliminate the need for users to enter multiple passwords or even respond to multiple login prompts.
Additionally, an appropriate, fully integrated SSO solution can eliminate password fatigue and encourage end-users to create strong, complex passwords that are simple to manage and even recover when forgotten. Of course, as Cranor noted in an interview, “You never have to explain why you’re making things more secure…removing that requirement would require a lot of explanation.” It’s like a coworker of mine frequently says: ‘Nobody ever got fired for buying IBM.’ But in reality, we need to be able to adapt to the evolving nature of digital security – even if that means upending some previously established standards. More and more evidence is coming to light questioning the need for mandatory password changes, and it seems that now is as good a time as any to take a good look at existing authentication security and see what can be done to increase security in a way that end-users will be able to manage. Things are changing in the world of cyber security – if we are to keep from being left in the dust, our best practices need to keep changing too.

About Christopher R. Perry
Christopher is the Content Manager and Editor at PistolStar, Inc., an authentication solution company that addresses various pain points and identity management concerns with its fully customizable solution. Christopher has an M.A. in English from SUNY Albany, and has held various IT positions including Tier 1 technical support and hardware setup, as well as custom PC construction – giving him a unique, supportive perspective from which to write.
March 14, 2016 Data breaches are expensive. Gross costs stemming from Target’s infamous 2013 breach totaled $252 million, and the Ponemon Institute’s annual survey found that the cost for each compromised record had risen for the eighth consecutive year, to approximately $150. Coupled with the number of data breaches reaching an all-time high in 2014 (a short-lived record likely to be beaten in 2015), it’s no surprise that cyberinsurance is in high demand. However, cyberinsurance should be viewed only as a safety net to protect financial interests, and not as the foundation of a cybersecurity architecture. Interest in cyberinsurance has risen alongside the increase in serious data breaches, as a means for companies to recoup a portion of the financial losses they sustain when sensitive data is stolen or otherwise exposed. Target recovered $90 million of its $250 million loss thanks to insurance, so there’s a very obvious benefit to having it. But at a recent conference for CISOs, where experts put their heads together to address some of their common problems, I was surprised by how many executives were hedging their company’s data loss bets with cyberinsurance policies.

A changing landscape
While certainly helpful, cyberinsurance isn’t the panacea CISOs might be hoping for. Data breaches have reached near-daily frequency, and the costs continue to climb. As such, cyberinsurance premiums are going up – sometimes by more than 30% – as are the policy conditions and exclusions. Insurers are also raising deductibles and setting limits on coverage. This has impacted some industries more severely than others, due in large part to the number of recent costly breaches in those business sectors. Other factors also affect the cost of cyberinsurance, such as the mandated requirements for breach disclosure and notifications, which vary by industry.
This can significantly run up the costs of a data breach well into the tens or hundreds of millions of dollars, driving some insurers to cap coverage at $100 million for risky customers. Thus, insurance payouts may only cover a portion of the costs, which typically include: breach notifications to affected customers, voluntary or mandatory credit monitoring services, PR and communications services, forensic investigations, lawsuits, IT remediation, fines and other penalties, brand and reputation damage, loss of business, and loss in market capitalization. The long-term repercussions Beyond the cost of the data lost, there are other factors to consider, such as damage to brand reputation and loss of customer trust, which can last for years and are much harder to quantify. And the general public isn’t going to care that the business saved money when their personal data was compromised. They’re going to want to know how it happened, when it happened, and what the company is going to do to prevent it from happening again. If customers don’t feel secure doing business, they’ll go elsewhere. Having cyberinsurance won’t change that, nor will it save a CISO’s job should a data breach occur. This is not to say that cyber liability insurance doesn’t have a place in the corporate quiver; it does. However, a legal hedge against a data breach is not the best way to go, as it’s a reactive, not proactive, strategy. Cyberinsurance should only be viewed as one component in a more comprehensive cybersecurity strategy to protect the organization against a breach. Companies still need to build a proper defense to prevent a data breach from happening in the first place – or at least minimize its effects.
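To make the numbers above concrete, here is a back-of-the-envelope sketch of the exposure that remains after an insurance payout. The only figures taken from the article are the ~$150-per-record estimate and the $100 million coverage cap; the breach size and deductible are hypothetical:

```python
COST_PER_RECORD = 150          # Ponemon's approximate per-record cost, quoted above
COVERAGE_CAP = 100_000_000     # cap some insurers apply to risky customers

def uninsured_exposure(records_compromised: int, deductible: int = 0) -> int:
    """Return the cost a company still bears after its insurance payout."""
    gross = records_compromised * COST_PER_RECORD
    payout = max(0, min(gross - deductible, COVERAGE_CAP))
    return gross - payout

# A hypothetical breach of 1.5 million records costs $225M gross; even with
# a payout at the full $100M cap, the company is left holding $125M.
print(uninsured_exposure(1_500_000))
```

Even with full coverage up to the cap, the uninsured remainder dwarfs the payout – which is exactly why prevention, not the policy, has to carry the load.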
This is best accomplished by following cybersecurity best practices, such as identifying the critical data assets, restricting or limiting access to them, applying a layered defense approach, monitoring the data assets for unapproved access or activity, and responding promptly to any suspicious activity. No insurance policy in the world is that multi-talented. About Daren Glenister Daren Glenister is the Field CTO for Intralinks (NYSE: IL), a leading global SaaS provider of content management and collaboration solutions. In his role, he acts as a customer advocate, working with enterprise organizations to evangelize data collaboration solutions and translate customer business challenges into product requirements, helping to steer Intralinks’ product roadmap and the evolving secure collaboration market. Glenister brings over 20 years of industry experience and leadership in security, compliance, secure collaboration and enterprise software, having worked with many of the Fortune 1000 companies to turn business challenges into real-world solutions. In the past, he has led technical and consulting businesses for CA Technologies, Symantec (Bindview), BMC Software, Intellinet and Sterling Software. Follow him on Twitter: @DarenGlenister.
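One of the best practices listed above – monitoring critical data assets for unapproved access – can be sketched in a few lines. The principal names and log format below are hypothetical, purely to illustrate the allow-list idea:

```python
# Principals approved to touch a given critical data asset (hypothetical).
ALLOWED = {"alice", "bob"}

def flag_unapproved(access_log):
    """Return the log entries whose user is not on the allow-list.

    In a real deployment this check would feed an alerting pipeline so
    suspicious activity can be responded to promptly.
    """
    return [entry for entry in access_log if entry["user"] not in ALLOWED]

log = [
    {"user": "alice", "action": "read"},
    {"user": "mallory", "action": "read"},   # not approved -> flagged
]
print(flag_unapproved(log))
```

The point is not the code but the posture: knowing who should touch an asset, and noticing immediately when someone else does.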
March 14, 2016 Advertisements and marketing are inseparable concepts. Embedded e-commerce content is what allows various online services to exist without charging their customers a penny. There are unspoken guidelines that the interested parties follow along the way, such as avoiding the redundancy of ads and only promoting commodities that are safe. When these campaigns aren’t overly intrusive, both the service providers and the end users are satisfied. This remarkable equilibrium, however, is amazingly easy to disrupt. Malicious programs categorized as adware drastically diminish one’s online experience by injecting obnoxious ads into all websites that the person visits. Note the fundamental difference between regular advertisements and the ones spawned by adware: the former are authorized and generated on the server side, while the latter are isolated strictly to a particular computer. Since the evil counterparts aren’t bound by regulations of any sort, they tend to get superfluous and may even cram up the greater part of an arbitrary web page. Virus-borne items include ads above the fold, coupons, banners, price comparison charts, bogus software updates, inline text and full-page interstitials. Such diversity enables the cyber criminals to get the biggest bang from their ad click fraud campaigns, but the infected users suffer the consequences big time. Although adware removal may be a challenge to perform, below are the techniques worth adopting to get rid of nasty ads on sites. Windows uninstall functionality should be the starting point. This feature is built into the operating system and allows removal of any installed program in a couple of clicks. All it takes is going to Control Panel from the Windows Start menu, selecting Uninstall a Program, examining the software list, picking the malicious entry and hitting Uninstall.
Some malware, though, obfuscates its presence on a PC and may not be listed, in which case it’s recommended to proceed to the next step. Manual removal from web browsers is very efficient when it comes to adware troubleshooting. Since it’s the web browsing facet that gets hit by these infections in the first place, spotting and trashing the offending browser add-on is one of the prerequisites of a successful cleanup. Nevertheless, adware can add a scheduled task to reanimate the extension after such action on the user’s end. A full reset of the affected browser’s configuration is more efficient, as it remediates all of the unwanted changes. In Google Chrome, this option is under Advanced Settings; in Mozilla Firefox, you need to go to Help – Troubleshooting Information; and in Internet Explorer, it’s under the Advanced tab of the Internet Options interface. Please be advised that all personalized browsing data will be obliterated as a result of this procedure. Registry troubleshooting may be necessary because adware usually creates new registry entries to persevere on the PC. This way, its executable is automatically launched as part of the system startup routine. To access the registry, type ‘regedit’ in the Start menu’s Search box, select the respective command and hit Enter. Then go to Edit and pick the Find option. In the box named ‘Find what’, type the name of the adware and press Enter. To figure out the name, take a look at the ads that are causing issues – there is typically an inscription down at the bottom, for instance ‘Ads by Shopperz’ or similar. If the registry search returns something for that query, do not hesitate to delete those entries. Temp folder cleanup is another recommendation that’s worthwhile. Having attacked a computer, PUAs (potentially unwanted programs) tend to download auxiliary components to the Temp directory, which is located on the system volume under AppData – Local.
An easy way to access that folder is by typing %temp% in the Search box. Deleting all entries there is safe, and file traces of the infection will thus be removed as well. ‘Show hidden files’ is a must-enable option. Some adware strains try to thwart removal by hiding their folder. Most of the time, the obfuscated malicious objects lurk inside the Program Files or AppData directory. To view and delete those, go to Control Panel, select Appearance and Personalization, and choose Folder Options. Proceed to the View tab, scroll down to Advanced settings, pick the ‘Show hidden files, folders and drives’ option and save the changes. Take a look at the contents of the above-mentioned folders, locate suspicious entries that were recently added, and remove the ones that are related to the adware program. Automatic removal of remaining adware traces is strongly advised. No matter how thorough you believe the manual cleaning was, fragments of the infection are still likely to be scattered across the system. Be sure to use a reliable security suite that has proved to be efficient in adware scenarios, such as the free Malwarebytes Anti-Malware or AdwCleaner. Run a full scan and get all detected artifacts removed. Last but certainly not least, a few simple prevention techniques can keep ad-injecting viruses away. First off, treat freeware installations with caution. Most of the known adware samples are distributed through bundling schemes, where a harmless free product and unwanted items go in one package. The presence of dangerous extras is typically only mentioned in fine print during the setup, which is why users overlook them. Technically, this is a legal spreading method, but its ethical facet is questionable. Be careful when using torrent trackers. The tactic dubbed torrent poisoning can be leveraged to distribute malicious code via the P2P protocol, and it is currently a growing attack vector.
Also, do not install anything recommended by nagging popup alerts on websites, whether it’s a Flash Player update or the “best” movie downloader. All in all, just be prudent when online and steer clear of stuff that looks fishy. About David Balaban David Balaban is a computer security researcher with over 10 years of experience in malware analysis and antivirus software evaluation. David runs the Privacy-PC project, which presents expert opinions on contemporary information security matters, including social engineering, penetration testing, threat intelligence, online privacy and white hat hacking. As part of his work at Privacy-PC, Mr. Balaban has interviewed such security celebrities as Dave Kennedy, Jay Jacobs and Robert David Steele to get firsthand perspectives on hot InfoSec issues. David has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.
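As a footnote to the temp-folder step above, here is a minimal cross-platform sketch of a dry-run cleanup: it only lists what a cleanup would remove, so the entries can be reviewed before anything is deleted. On Windows, `tempfile.gettempdir()` resolves to the %temp% folder described in the walkthrough:

```python
import os
import tempfile

def list_temp_entries():
    """List entries in the user's temp directory without deleting anything.

    A dry run is deliberately safer than a blanket delete: files that
    are currently in use will fail to delete anyway, and reviewing the
    list first helps spot suspicious recently-added components.
    """
    temp_dir = tempfile.gettempdir()
    return [os.path.join(temp_dir, name) for name in os.listdir(temp_dir)]

for path in list_temp_entries():
    print(path)
```

A real cleanup would follow the review with `os.remove`/`shutil.rmtree` on the chosen entries, skipping files the OS still holds open.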
March 11, 2016 The planets are aligning against the privacy of every individual who uses a healthcare system; those planets being complexity and new technologies. Modern medicine has to deal with massive numbers of patients, and the routes taken by patient data are often highly convoluted, complex and open to error. As the system currently stands, patient information is shared between what amounts to a small eco-system of associated actors. These include: employers, lawyers, insurance companies, general practitioners, pharmacies and hospitals. The image below shows some work carried out to quantify the complexity of the data sharing eco-system – this shows the pathway of data when a simple blood test was ordered by a general practitioner. This study was carried out back in 2006 by Enrico Coiera, and since then the complexity has increased as new technologies such as Cloud systems and mobile devices have entered the arena. The types of data flowing through the healthcare eco-system are also highly varied. Often the data capture mechanisms used vary across the system, resulting in data that is difficult to aggregate and analyze. This non-standardization is compounded by the era of big data. Healthcare data is now, on the whole, digitized, and the volumes of digitized data are massive. This has both positive and negative connotations for the healthcare industry. On the plus side, it is expected that the use of big data can save the industry billions, with one analysis predicting a $100 billion increase in annual profits through the use of big data. On a more negative note, the complexity of the healthcare data eco-system may well be one of the reasons healthcare is a prime target for cyber-crime. In 2014 one of the biggest security breaches ever involving personally identifying information (PII) occurred against healthcare insurer Anthem. This breach resulted in the theft of almost 80 million records containing personal details, including social security numbers.
In addition, cyber-crime against healthcare providers is not surprising when you consider that a healthcare record is worth more than any other data record on the black market, with figures setting the price of the average stolen healthcare record at $363. But it’s not just the big breaches that are a worry for patient data privacy; even small breaches can result in loss of privacy. The HIPAA Breach Notification Rule requires that any healthcare industry member reveal a breach that affects more than 500 individuals. The resultant notification list can be seen on the website of the U.S. Department of Health and Human Services. If you generate a report for January 1st 2015 to September 22nd 2015, it pulls up 190 incidents ranging from laptop thefts to unauthorized access of electronic healthcare records, spanning the extended family of healthcare provision. HIPAA should never be used as a coverall for privacy protection; HIPAA is a set of guidelines for security best practice. Healthcare privacy is a much more diffuse concept that cannot be achieved simply by applying encryption to a database, as exemplified by one of the well-publicized Target privacy breaches, where the company sent out baby coupons to a teenage girl, identifying her to her parents as being pregnant. Making a complex system even more so New technologies, which are adding new routes of data vulnerability, do bring patient benefit. The use of electronic healthcare records (EHR) within an integrated platform brings greater efficiency, allowing disparate units, such as consultancy, documentation and pharmacy, to more easily share information on a given patient. A 2013 study by RAND showed that the USA could save around $78 billion by moving to a fully EHR-based system. However, the advent of ‘data driven medicine’, which is enabled by the use of EHR and Cloud-based platforms, will open up new challenges for data protection and privacy of information.
Mobile devices, or mHealth, which offer advanced data collection and sharing opportunities, are also becoming ubiquitous in healthcare, with a large proportion of clinicians estimated to be using a mobile device for work and 50% of those using an iPad in their practice. The use of mobile devices to generate and share data is not, of course, confined to the professional. Patients are starting to use mobile apps. A report by a mobile analyst firm in June 2014 saw a 62% increase in the use of health apps by the public, and there is a move for the data generated using these apps to be shared with doctors – so much so that the FDA is currently exploring how to regulate these apps. Then there is the advent of the Internet of Things (IoT). The benefits of IoT in healthcare can be substantial: research identified in a report by McAfee shows the use of IoT in healthcare could provide savings of $63 billion over the next 15 years. However, as an extended family of Internet-connected devices enters the patient data eco-system, we will see even more complexity and more pathway extensions that open up areas where privacy and security are at risk. The same report also stated that privacy violations are one of the expected downsides of the use of IoT in healthcare and that the use of encrypted data transmissions between devices is crucial to remediate this issue. Where do we go from here? Efficient data sharing is a vital part of modern medicine. Add to this the need to share these data across different device types, often using Cloud technologies, within the context of an increasingly sophisticated cybercrime landscape, and you create a can of worms as far as ensuring that patient data privacy is upheld. Standards bodies provide certifications that give a framework for health record privacy, particularly EHR. The HIPAA privacy and security requirements have been embedded into the U.S. Medicare and Medicaid EHR incentive programs, requiring providers to reach certain levels of attainment in the use of EHRs.
The Center for Democracy & Technology (CDT), in partnership with the California Healthcare Foundation, has developed a set of privacy principles for healthcare use of data that cover the main areas of consent, notice, security and choice. The bottom-line outcome of the review is that patients should have more choice in how their information is collected and used, the fundamental principle being that patients have rights to their own data. The CDT recognizes that patient data is needed for research, for example, but it should be used in an environment of transparency and user choice. The CDT is currently running a series of consultative workshops with stakeholders looking at the impact of big data on patient privacy and how to resolve these issues. One of the areas they wish to focus on is how to interpret the Fair Information Practice Principles (FIPPs)-based HIPAA rules. The outcome they are hoping for is to create privacy principles that will encompass both traditional and emerging healthcare applications. But principles and guidelines are not enough; you need technical innovation that can apply these principles. There are a number of groups working in the technology area of healthcare data sharing, including the Kantara Initiative. Here a working group, known as UMA (User-Managed Access), is working on an open standard Internet protocol that will allow users to manage their consent to share data within a healthcare context. It is the use of technologies like the UMA protocol that will enable wide-scale EHR platforms, within an extended IoT/mHealth framework, to be utilized in a more transparent, consented and privacy-enhanced manner. About Avani Desai Avani Desai is a Principal and Executive Vice President with over 13 years of technology and privacy experience. About Jeanmarie Loria Jeanmarie Loria is a Managing Director providing quality consulting and project management services to payors and providers to increase clients’ ROI and satisfaction.
March 11, 2016 Ordinarily, falling victim to a ransom plot means that you are the son or daughter of some wealthy person and the only way out is paying tons of money or waiting for Arnold Schwarzenegger or Kurt Russell to come and rescue you – or, at least, that’s what TV would have us believe. These days, being held for ransom can actually happen quite differently, with your computer of all things. I’m talking, of course, about ransomware, a particularly diabolical type of malware – that is to say, bad software – that’s been making headlines recently. Here’s how it works. Once ransomware gets on your computer, usually through an infected email attachment or the all too common Trojan horse attack, it will lock your computer or your data in some way and demand payment in exchange for giving control of your system back to you. Some of the simpler forms of ransomware will simply try to fool you into thinking there’s something wrong with your computer and get you to pay money to fix it. A common tactic we see is banner ads telling you that you’ve been inexplicably infected by something. Often with those, you’ve still got at least rudimentary control over your system, so the only real issue is that you have to deal with constant pop-ups until you find a way to get rid of the malware. A much more irritating kind of ransomware will lock your computer entirely and keep you from logging into your operating system unless you pay the money. Many of these varieties of ransomware will display a threatening message purporting to be from the FBI or some other super hardcore police agency, saying that your computer was used for something highly illegal, but that you can get your computer back and avoid doing hard time just by paying a few hundred dollars. Sounds absurd, right? But people have fallen victim to this, and even if you recognize the scam immediately, it can be a real pain to remove.
Worst of all is the ransomware that not only locks your system but also encrypts your files and won’t provide you with the keys to decrypt them unless you pay up. The most notable of these is Cryptolocker, although many other variants have popped up since that one first made the news back in 2013. There are other issues with this type of ransomware, unsurprisingly. Cyber-criminals aren’t the most trustworthy folks, and many people have reported not getting their files back even after paying the ransom. On top of that, some kinds of ransomware don’t even ask permission; they just hit your Bitcoin wallet and take the money without even giving you a chance to say: “Well, hold on, let me think about whether this data is worth paying for.” So, how can you rescue your computer and protect your cash if you get infected? Many of the non-encrypting types of ransomware can be removed by booting into safe mode and running an up-to-date anti-malware tool – or, if that fails, by downloading a bootable removal tool to a flash drive and running that. However, if you’ve been hit by crypto ransomware, you’re probably out of luck, as most of these use very strong encryption algorithms. In fact, the FBI has advised people to just pay these ransoms in the past. If you don’t like the idea of your money going to online criminals, back up your data somewhere, preferably offline. And please remember to explain to your grandparents what a banner ad is if they call you in a panic over having fifty viruses on their all-in-one PC. About David Balaban David Balaban is a computer security researcher with over 10 years of experience in malware analysis and antivirus software evaluation. David runs the Privacy-PC project, which presents expert opinions on contemporary information security matters, including social engineering, penetration testing, threat intelligence, online privacy and white hat hacking. As part of his work at Privacy-PC, Mr.
Balaban has interviewed such security celebrities as Dave Kennedy, Jay Jacobs and Robert David Steele to get firsthand perspectives on hot InfoSec issues. David has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.
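The offline-backup advice above is simple to automate. A minimal sketch, assuming the source is a directory and the backup root is a mounted (ideally removable) drive – the paths and folder-naming scheme are placeholders, not from the article:

```python
import shutil
import time
from pathlib import Path

def backup(source: str, backup_root: str) -> Path:
    """Copy `source` into a timestamped folder under `backup_root`.

    Each run produces a fresh snapshot, so an infection that strikes
    after a backup cannot encrypt the earlier copies - provided the
    backup media is disconnected between runs.
    """
    dest = Path(backup_root) / time.strftime("backup-%Y%m%d-%H%M%S")
    shutil.copytree(source, dest)
    return dest
```

For example, `backup("~/Documents", "/mnt/usb-drive")` would create a dated snapshot on the external drive; the crucial step is unplugging that drive afterwards, since ransomware happily encrypts any volume it can see.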
March 11, 2016 Many modern gifts are digital products: notebooks, ultrabooks, tablets, smartphones. How do we protect our children when they go online? According to the latest statistics, our children are spending over 9 hours a day connected. Think how scary that statistic is – 9 hours! You are probably asking yourself what they are doing all that time, and how much guidance we are really giving them on how to use the Internet securely. Staying on top of dangerous apps that your kids shouldn’t be downloading is crucial, and now there’s a way that kids can hide those bad apps right in plain sight. You have to know what to be cautious about, and there are many perilous things out there. For example, there is an app called YouNow. Hugely popular with young people, it’s one parents should really be aware of. YouNow is a streaming application that’s quite possibly on your kids’ devices. Under hashtags like ‘bored’ or ‘dancing,’ there are millions of children chilling out and streaming; one more hashtag, ‘sleeping squad,’ features users while they sleep. In most cases, young people are live streaming as they sit at home in their rooms, basically talking to unknown people who write messages back. It sounds innocent, but it can get not so innocent fast. Kids tend to do rather shocking things to earn likes, and they also share a lot of personal information. It’s just one app parents should keep on their radar. Some other apps parents should keep tabs on are texting apps like ooVoo and Kik; the self-deleting apps like Snapchat, Burn Note, and Yik Yak; and the dating apps like Tinder, MeetMe, and Skout. You also want to pay attention to apps that will hide certain apps on your phone. For example, Vaulty allows users to generate a password-protected repository where they can hide videos and pictures.
In addition, Vaulty may take a photo of anyone who attempts to enter the vault but puts in the incorrect password. Hide It Pro, similar to Vaulty, enables you to conceal files. Hide It Pro itself is masked to look like a media manager; the app provides a lock-protected folder where users can conceal videos, messages and also other applications. So, what can we do to keep our kids protected when they are playing with their gadgets? Listed below are the most important steps you have to take to be certain that your sons or daughters are secure. The first one: location, location, location. Wherever your kids are using a computer, make sure it’s in a public area of the household; that way you can be casual and just sneak on by and look at what they happen to be watching and typing. Mothers can be casually cooking or whatever it is and see that children are doing something that is safe and secure. According to the latest surveys, over 17% of parents had seen their kids doing things online that were completely inappropriate, and 60% of parents said they didn’t really know what their kids were doing – and that’s totally scary. Next tip: stay on top of social. Get all of your kids’ usernames and passwords. If they want accounts, they need to share their accounts’ usernames and passwords with you. Not only that – friend them, follow them and see what’s going on. Some children may write bad comments or post inappropriate photos; as a parent, knowing where to look and being aware of this allows you to address such situations. The next tip stresses: “Share with one, share with all.” You have to educate your children about their digital footprint; they need to understand that what they’re putting online, whether their profile is locked to the public or not, is going to stay there for good. Parents should be concerned about their kids over-sharing about their family and themselves. Take action. This is critical. Use parental control software. There are various free and paid versions.
Parental control software monitors your child’s location, allows you to limit how much time they spend online, and allows you to limit the websites they can go to. If they want to visit a specific website, they can send you a special message requesting it. You can send them messages that take over their apps or the device itself and don’t allow them to use it until they respond to your message. Parental control software also lets you whitelist the apps your kids wish to install. In fact, you should become the administrator for all those devices. Putting limitations on what children can do is absolutely appropriate in the present-day tech world. Finally, teach critical thinking and reward it once kids learn. Set some limitations when you give them a device – don’t just hand it over, but first teach them how to use it properly. About David Balaban David Balaban is a computer security researcher with over 10 years of experience in malware analysis and antivirus software evaluation. David runs the Privacy-PC project, which presents expert opinions on contemporary information security matters, including social engineering, penetration testing, threat intelligence, online privacy and white hat hacking. As part of his work at Privacy-PC, Mr. Balaban has interviewed such security celebrities as Dave Kennedy, Jay Jacobs and Robert David Steele to get firsthand perspectives on hot InfoSec issues. David has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.
March 10, 2016 NNT review and discuss the range of cyber security threats predicted by analysts and vendors and present a Top Ten of cyber security safety measures. Drinking kale and beetroot smoothies isn’t one of them, but to find out why not, and to see what did make the list, read on… “To begin with we consulted a number of expert sources. As with many of these prescient-type reports, conjecture and guesswork certainly play their part. That said, there is enough fact based on current trends and previously observed activity to take all this very seriously indeed.” What Does Experian Think? Chip & PIN won’t stop payment card breaches (only 53% of IT security executives believe EMV cards will decrease the risk of a breach). While we may have expected some pessimism about the claims that Chip & PIN would represent an end to credit card theft, it is interesting that 47% predict no discernible improvement at all – never mind any sort of total prevention. Attacks on healthcare institutions will increase. Healthcare records are worth 10 times more than credit card data, and healthcare providers have notoriously poor defenses – witness the FBI warnings following a bout of breaches, including one leading provider who had 4.5 million records compromised. Healthcare records are being used to fabricate insurance claims, purchase drugs and generate fake IDs.
The lack of prevailing security and the rich source of personal data available make this a very attractive target for cyber criminals. Cyber conflicts between enemy nations will increasingly affect civilians; targets may include public facilities such as airports, hospitals and government facilities. Perhaps this should come as no surprise, as we have already seen examples of this right back to Stuxnet (originally designed to attack Iranian nuclear facilities) and the more recent disabling of Ukrainian cell networks attributed to Russian intelligence. Hacktivism will make a comeback. Hacktivism – both corporate shaming and ‘cause-based’ – will increase, and is considered the ultimate leveler. From Ashley Madison to threats on ISIS, the apparent success of some of these initiatives is fueling a renewed vigor for those purporting to represent a cause, however justified. What Does Trend Micro Think? 2016 will see an increase in online extortion. We’ve already seen examples such as the LA Presbyterian Med Center settlement. The fact that this was a relatively quick and easy ‘Hack for Cash’ is driving another predicted trend, which we will touch on later; the LA hack speaks to both the targeting of healthcare and the increase in ransomware. At least one consumer-grade smart device will cause fatalities. From drones circling our no-fly zones to medical smart devices used to transmit emergency care information, all of these are targets and all occupy worryingly close links to human lives. China will drive mobile malware growth to 20M by the end of 2016. Growth in mobile malware is already accelerating far faster than traditional computer-based malware: since we started tracking PC-based malware in 1984, it took 20 years to grow to 20 million instances, whereas mobile malware has grown to these levels within 6 years (source: Trend Micro). Hacktivism will increase – Trend agrees with Experian!
Little or no change in priority or investment at a corporate level: despite all of this, less than 50% of organizations will have dedicated IT protection specialists. Cybercrime legislation will become a global movement: nations will inevitably combine forces to improve both cyber protection and their ability to fight back. What Does Gartner Think? The attack surface is changing all the time: the contemporary threat environment is broadening with the advance of Shadow and Bimodal IT, and the means of enabling IT is changing – the marketing department may well have its own IT assets beyond the IT team’s reach. Map visibility: the better you understand what you have, the better able you will be to protect and monitor it. Don’t focus too much on zero-day threats! 99% of exploits are based on vulnerabilities known for at least a year, and this trend will continue through 2020; last year’s most prevalent malware, Conficker, was based on a 7-year-old vulnerability within Windows. Emphasis should be more on prevention than detection: focus on the fundamentals of cyber protection rather than investing in emerging technologies. Known vulnerabilities will be sold on the black market more: where new vulnerabilities and new exploit techniques are discovered, their value is now better understood, with an established market available. NNT Summary of 2016 Cyber Security Threat Predictions The field of attack is broadening as new lucrative and disruptive targets are identified, and those with a cause to promote seek to enter the arena. Organized crime will join the cyber-crime movement as it ceases to be the sole domain of the specialist hacker.
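Gartner's point above – that 99% of exploits target vulnerabilities known for at least a year – suggests a simple triage check on any vulnerability inventory: surface the flaws that have been public the longest and patch those first. A sketch with an invented inventory (only the Conficker CVE, MS08-067's CVE-2008-4250, and its October 2008 disclosure are real; the other entry is hypothetical):

```python
from datetime import date

# Hypothetical inventory: CVE id -> date the vulnerability was disclosed.
inventory = {
    "CVE-2008-4250": date(2008, 10, 23),  # the Windows flaw Conficker exploited
    "CVE-2016-0001": date(2016, 1, 12),   # made-up recent entry for contrast
}

def stale_vulns(known: dict, today: date, max_age_days: int = 365):
    """Return CVEs that have been public for longer than max_age_days.

    These are the 'boring' long-known flaws that, per the prediction
    above, account for the vast majority of successful exploits.
    """
    return sorted(cve for cve, disclosed in known.items()
                  if (today - disclosed).days > max_age_days)

print(stale_vulns(inventory, date(2016, 3, 10)))
```

Run against a real scanner export, a list like this is a far better spend of patching effort than chasing the latest zero-day headline.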
The $17k quick and easy ‘Hack for Cash’ at LA Presbyterian Medical Center, combined with the prevalence of malware on the black market, makes cyber-crime suddenly accessible and attractive to common-or-garden crooks. Apathy (“it won’t happen to us”) and cost will remain the two major blocks to corporate and government cyber security. The litigators are circling! The stakes are going to be raised as more lawsuits are brought for damages relating to the loss of personally identifiable information. The Typical Mistakes Made by Most IT Teams and Why Corporate Cyber Security Fails So we all get sold on the need for cyber security defense measures, and there is plenty of FUD (fear, uncertainty and doubt) used to amplify the urgency and acuteness of the need. The difficulty in determining the right cyber security strategy for your organization, and in turn which technologies and products to use, is not too dissimilar to assessing the market choices for keeping your body fit and healthy. Many vendors claim that they can deal with all known threats to the enterprise when actually, just like your personal health, it just isn’t that simple. Cyber security takes many forms, and the range and nature of threats is so varied that there is no getting away from the fact that it will require a multi-faceted solution. But it’s easy to be tempted by the pitch! A sexy-looking security appliance with a slick GUI is very tempting. And if it really could capture and defeat APTs, stop phishing attacks and malware, block and alert on insider threats, hacktivism and rogue employees, while also protecting your IT from ransomware and government-sponsored/blue chip espionage, then all your problems would be solved. Likewise, if you really could lose weight, build a six pack and get marathon-beating stamina from drinking a kale and Persian cucumber milkshake, we would all do it.
And of course, an antioxidant-rich cocktail of vitamins and nutrients probably will help in some way, but it isn't going to get everyone losing weight and getting fit. In fact, most would give it up and go back to bad habits. Which brings us back to Cyber Security – it's also a 24/7 discipline and requires a combination of technology measures, procedures and working practices to maintain solid defenses. It's precisely for this reason that organizations get breached, and will continue to get breached unless a Cyber Security mind-set becomes second nature for all employees. So, in the meantime, what should you be focusing on? Here's a quick summary – there are more comprehensive security policies, standards and guidelines out there – see the PCI DSS (Version 3.2 is almost here) or any of the other standards I showed earlier, like NERC CIP, NIST 800-53 etc. There are also generic policies, like the SANS Top 20 or the CIS Security Policy, that are freely available.

Top Ten Cyber Security Tips
1. Mitigate vulnerabilities
2. Firewall or, better, IPS
3. AV
4. EMET
5. AppLocker
6. System integrity monitoring
7. Change control – augmented with threat intelligence
8. Promote and enforce an IT Security Policy
9. BitLocker
10. Finally – don't be too thrown off course by the latest 'must-haves'

Top Ten Cyber Security Tip: Mitigate Vulnerabilities

Easier said than done, and most security policies duck out of providing specific prescriptive guidance, partly because this is a fluid area and the latest intelligence is always needed, but also because vulnerabilities need to be balanced against risk and operational requirements. In other words, most security professionals will tell you to minimize open ports and remove any unnecessary services, in particular FTP and web servers, so a typical hardening exercise involves removing these. But if you actually need these for your application, then you will need to provide security via other means.
The latest Microsoft Security Policy covers literally thousands of settings that control the functional operation, and in turn the security, of a host, so deriving the best balanced build standard can be a painstaking task. The Center for Internet Security Benchmarks provide secure configuration guidance drawn from manufacturers like Microsoft and Red Hat, combined with academic and security researcher input. They are available free of charge and provide full details for auditing and remediating vulnerabilities across a comprehensive range of platforms. This is an area where automated tools are definitely essential.

Firewall or, Better, IPS; AV; EMET; AppLocker

The best understood elements of any Cyber Security kitbag are the firewall and AV. They are fallible, as we all know – zero-day threats easily evade AV, even while the AV is gobbling up system resources and, more often than not, getting in the way. Likewise for the firewall or IPS – there are numerous ways to leapfrog the firewall using phishing attacks, APT technology or just plain old inside help. However, as we said earlier, there isn't going to be a quick fix or single course of action or technology that will keep us secure, and these legacy security components still play an essential role. Less well understood are some of the complementary technologies available that can be used to plug further weak spots. The market is awash with good ideas and exciting-sounding technology, but I would say look at what is available to you right now that is probably not being used: namely EMET and AppLocker. Both are Microsoft offerings, free to use, but they require a little know-how and experimentation to implement. EMET works to head off a number of malware techniques, especially 'file-less' malware that tries to use process hijacking, memory exploits, browser vulnerabilities and man-in-the-middle attacks. AppLocker provides the means to whitelist/blacklist program and DLL operation to really lock down PC and server operation.
There are many commercial offerings covering similar areas, of course, but neither of these, nor Windows Defender, should be overlooked.

System Integrity Monitoring; Change Control – Augmented with Threat Intelligence

There are three main reasons why change control and system integrity monitoring are vital to maintaining Cyber Security. Firstly, once our vulnerability mitigation and secure configuration work has been implemented, we need it to remain in effect forevermore, so we need a means of assessing when changes are made to systems, and of understanding what they are and whether they weaken security. Secondly, any change or update could impact functional operation, so it is vital we have visibility of any changes made. And finally, if we can get visibility of changes as they happen – and especially if we have a means of reconciling these with details of known, expected, planned changes – then we have a highly sensitive breach detection mechanism to spot suspicious action when it happens. All leading Cyber Security policies/standards call for change control and system integrity monitoring for all these reasons – it is key.

Promote and Enforce an IT Security Policy; Encryption (BitLocker)

Cyber Security isn't just the responsibility of the IT team and their security kit; it must be an organization-wide competence. Children grow up being taught about food hygiene; it isn't just the remit of professional chefs. Unfortunately, it takes generations for this kind of knowledge to become universally assimilated, so until Cyber Security hygiene itself becomes a basic life skill for all, it will be down to the workplace to educate. To this end, in case you don't already have flyers/posters for Cyber Security education, there are plenty of resources available; again, the SANS Institute provides a bunch of these that are free to use and very good.
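To make the system integrity monitoring idea described earlier concrete, here is a minimal sketch of file integrity monitoring in Python – hashing every file under a watched directory and reporting any drift from a saved baseline. This is purely illustrative; real FIM products such as those the article discusses also track permissions, registry settings and planned-change reconciliation.

```python
import hashlib
import tempfile
from pathlib import Path

def build_baseline(root):
    """Hash every file under root, keyed by path relative to root."""
    baseline = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            baseline[str(path.relative_to(root))] = digest
    return baseline

def compare(baseline, current):
    """Return (added, removed, modified) paths between two baselines."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    modified = sorted(p for p in baseline
                      if p in current and baseline[p] != current[p])
    return added, removed, modified

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        conf = Path(root) / "app.conf"
        conf.write_text("setting=secure")
        before = build_baseline(root)
        conf.write_text("setting=weakened")  # an unauthorized change
        after = build_baseline(root)
        print(compare(before, after))        # ([], [], ['app.conf'])
```

In a real deployment the "compare" step would run continuously and the modified list would be reconciled against approved change records, as the article recommends.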
Separate but related is the subject of data encryption – it slows everything down and gets in the way on a daily basis, BUT it can prove a lifesaver if there is a breach that results in data theft. Loss of a company laptop is a pain, but the loss of confidential data could result in anything from acute embarrassment to fines and lawsuits. Again, plenty of commercial options exist, and there is also a free-of-charge Microsoft option for this too in BitLocker. You can use it to encrypt all drives, or just data on local and removable drives. In an enterprise environment this is controlled via Group Policy and, as such, can also be audited automatically in the same way that vulnerabilities can be assessed. Used correctly, this same audit report can not only provide the recommended settings to use when first implementing BitLocker, but will also highlight any drift from your preferred corporate build standard, along with all the other security settings needed to protect systems.

Finally – Don't Be Too Thrown Off Course by the Latest 'Must-Haves'

The final piece of advice really is to focus on getting the fundamentals right and not to chase the latest niche or point products. If the maxim that 'there is no such thing as 100% security' is accepted, then how are you going to achieve Cyber Security? The only answer is that it will need to be managed as a layered, 360-degree discipline, comprising technology and processes to first instigate and then maintain security. Vulnerability management, system hardening, change control and breach detection are some of the absolutely essential components needed – the good news is that this can all be automated, with just the 'need to know' exceptions reported for investigation. Final words: get your technology right for general, everyday security before investing too much time and money in the latest 'hot' product.

About
New Net Technologies is a global provider of data security and compliance solutions.
Clients include NBC Universal, HP, Ryanair, Arvato and the US Army. NNT Change Tracker Gen 7™ provides continuous protection against known and emerging Cyber Security threats in an easy-to-use solution. Unlike traditional scanning solutions, Change Tracker Gen 7™ uses automated file integrity monitoring agents to provide continuous, real-time detection of vulnerabilities. And if the unthinkable happens, immediate notification is provided when malware is introduced to a system or when any other breach activity is detected. Operating at a forensic level within the IT infrastructure, Change Tracker™ works across all popular platforms.
March 10, 2016 Following the publication of the second draft of the Investigatory Powers Bill, techUK has pulled together a summary of the changes that have been made. These relate to recommendations made by the three committees that scrutinised the bill.

Privacy
Committee recommendations: The Intelligence & Security Committee called for an entire section of the Bill dedicated to addressing privacy safeguards, clearly setting out the universal privacy protections which apply across all the investigatory powers.
Key changes: Part 1 now contains a short overview of the safeguards throughout the Bill. This doesn't go as far as the ISC's recommendation that protections should form the backbone of the Bill. The Home Office has instead simply added the word "privacy" to the subheading and provided a summary of privacy protections, rather than an overarching statement recognising the supremacy of privacy.

Encryption
Committee recommendations: Reports called for further clarity and reassurance, on the face of the Bill or within the Codes of Practice, that end-to-end encrypted services and products would not be affected by Section 189 notices in the Bill.
Key changes: The language on encryption has been amended. Section 189, proposing that obligations be placed on CSPs "relating to the removal of electronic protection applied by a relevant operator to any communications or data", has been changed. Obligations now apply "to the removal by a relevant operator of electronic protection applied by or on behalf of that operator to any communications or data".

Definitions
Committee recommendations: Highlighted the concerns within industry as to the overly broad and confusing definitions of terms such as "data", "internet connection records" (ICRs) and "related communications data".
Key changes: The definition of the term "data" has been changed in line with the Joint Committee's recommendation.
The new definition makes clear that the term "data" in the revised Bill includes "data which is not electronic data and any information (whether or not electronic)".

Extraterritoriality
Committee recommendations: The Bill must complement, rather than conflict with, the aim of creating an international legal framework for the lawful acquisition of data by government agencies. The Bill should be viewed as an international piece of legislation, with global implications.
Key changes: Little has changed. Although there are greater and more consistent safeguards on proportionality and conflicts of law for overseas providers, extraterritorial provisions that undermine long-term objectives still remain.

Internet Connection Records
Committee recommendations: Reports expressed concerns about the definitions and technical feasibility of retaining ICRs. The draft Bill contained inconsistent definitions of ICRs that created uncertainty within industry as to their technical feasibility.
Key changes: The Bill now has a single definition of ICRs that remains consistent throughout the Bill, with references to internet connection records appearing in both the  and retention sections of the Bill.

About
techUK represents the companies and technologies that are defining today the world that we will live in tomorrow. More than 850 companies are members of techUK. Collectively they employ approximately 700,000 people, about half of all tech sector jobs in the UK. These companies range from leading FTSE 100 companies to innovative new start-ups. The majority of our members are small and medium-sized businesses.
March 10, 2016 A flaw in the Oracle database listener, if not mitigated, could allow an attacker to take complete control of an Oracle database through an attack known as the TNS Poison Attack. This vulnerability is remotely exploitable without authentication credentials. This classic man-in-the-middle (MITM) vulnerability has been published as security alert CVE-2012-1675 and received a CVSS base score of 7.5. It impacts the confidentiality, integrity and availability of the database. Joxean Koret discovered this vulnerability in 2008 and publicly disclosed it in 2012. The TNS Poison Attack exploits the Oracle listener's database service registration functionality. Oracle database users connect to database services through the Oracle TNS Listener, which acts as a traffic cop. A malicious attacker residing on the same network as the database registers a malicious service with the database listener, using the same service name as a legitimate database service. No credentials are required to register a database service with the listener; an attacker can use Oracle database software or other easily available tools to register a malicious database service. Once the malicious database service is registered under the same name as the legitimate service, the Oracle listener has two services to choose from – a legitimate service and a malicious service. With two database services available, the Oracle listener switches to load-balancing traffic-cop mode, directing users alternately to the legitimate service and the malicious service. At least 50% of user sessions are directed to the malicious service. Database user sessions that are now communicating through the malicious service can be hijacked by the attacker. The attacker is in the middle: all communication from the users to the database now passes through the malicious attacker. Attack post established. The attacker has full purview of what users are communicating with the database.
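To see why roughly half of new sessions land on the rogue service, consider a toy simulation of the listener's round-robin dispatch between the two registered services (purely illustrative; the real listener's load balancing also weights choices by reported instance load):

```python
from itertools import cycle

def dispatch(services, n_sessions):
    """Round-robin n_sessions across the registered services and
    return how many sessions each service received."""
    counts = {s: 0 for s in services}
    rr = cycle(services)
    for _ in range(n_sessions):
        counts[next(rr)] += 1
    return counts

# Two services registered under the same name: one legitimate, one rogue.
print(dispatch(["legitimate", "malicious"], 100))
# {'legitimate': 50, 'malicious': 50} – half of all sessions are hijackable
```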
At a minimum, the attacker can view and steal the data. Additional SQL commands may be injected to broaden the scope or carry out additional attacks. If a database user communicating with the database happens to be a privileged user with the DBA role, then the attacker has complete control of the database. Database compromised. Mission accomplished. The TNS Poison Attack is mitigated through the Valid Node Checking Registration (VNCR) setting, which permits service registration only from known nodes or IPs. The specific mitigation steps depend on the version of the database you are running, as shown below.

Oracle Database Releases 12.1 or above: If you are running Oracle database 12.1 or above, then you don't need to read further unless you are just curious. The default Oracle listener configuration in Oracle 12c protects you against this vulnerability. Although you don't need to set the VALID_NODE_CHECKING_REGISTRATION_<listener_name> parameter to LOCAL in listener.ora, I would suggest that you explicitly do so just to make sure, as shown below:

LISTENER_DB =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.100)(PORT=1521))
    )
  )

VALID_NODE_CHECKING_REGISTRATION_LISTENER_DB=LOCAL

This parameter ensures that only databases on the same server as the listener are permitted to register services with the listener. No remote registration of services is permitted. If a malicious attacker attempts to register a service with the listener from a remote server, you will see the following error message in the listener log:

Listener(VNCR option 1) rejected Registration request from destination 192.168.200.131
12-NOV-2015 17:35:42 * service_register_NSGR * 1182

Oracle's clustering solution, Oracle RAC, requires remote registration of services.
In order to protect Oracle RAC from the TNS Poison Attack, you also need to set REGISTRATION_INVITED_NODES_<listener_name> to specify the IP addresses of the nodes from which remote registration is required.

Oracle Database Release 11.2.0.4: If you are running Oracle database 11g R2 11.2.0.4, then you must mitigate this risk through the listener configuration. As illustrated above, you need to set VALID_NODE_CHECKING_REGISTRATION_<listener_name> to LOCAL. Alternate values for this parameter are ON or 1, which accomplish the same objective. The default value for this parameter is OFF, leaving the door open to an attack. As mentioned above, if you are running RAC, then you also need to set REGISTRATION_INVITED_NODES_<listener_name> to allow instance registration from trusted/valid nodes only.

Oracle Database Release 11.2.0.3 or older: Before I describe the mitigation for older releases, let me mention that you should not be running Oracle database 11.2.0.3 or older. Oracle has already de-supported older releases, and no security patches are available for them. You should upgrade as soon as possible. Oracle does, however, provide a workaround for older releases through the Class of Secure Transport (COST) parameters. There are three parameters – SECURE_PROTOCOL_<listener_name>, SECURE_REGISTER_<listener_name> and SECURE_CONTROL_<listener_name> – that can be configured to control registration of services from valid nodes only. Please refer to the Oracle documentation for more information. Note that the COST parameters can also be used with Oracle database releases 11.2.0.4 or newer to protect against the TNS Poison Attack, but the procedure is more complex and requires additional configuration. What makes this vulnerability still relevant, even four years after its full disclosure, is that many organizations running various flavors of Oracle database 11g R2 releases such as 11.2.0.3, 11.2.0.4, etc. haven't yet mitigated this flaw.
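As a quick sanity check, a small script along the following lines could scan a listener.ora for the VNCR setting discussed above and flag values that leave the door open. This is a hypothetical helper sketch, not an Oracle-supplied tool, and the parameter-name matching is deliberately simplified:

```python
import re

def vncr_status(listener_ora_text):
    """Map each VALID_NODE_CHECKING_REGISTRATION_* parameter found in
    listener.ora text to True if its value mitigates TNS poisoning
    (LOCAL, ON or 1), False otherwise (e.g. the default OFF)."""
    pattern = re.compile(
        r"^\s*(VALID_NODE_CHECKING_REGISTRATION_\w+)\s*=\s*(\S+)",
        re.IGNORECASE | re.MULTILINE,
    )
    return {
        name.upper(): value.upper() in ("LOCAL", "ON", "1")
        for name, value in pattern.findall(listener_ora_text)
    }

sample = """
LISTENER_DB =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.100)(PORT=1521))
    )
  )
VALID_NODE_CHECKING_REGISTRATION_LISTENER_DB=LOCAL
"""
print(vncr_status(sample))
# {'VALID_NODE_CHECKING_REGISTRATION_LISTENER_DB': True}
```

A listener whose parameter is absent from the returned dict, or mapped to False, would need the mitigation applied before it is safe from remote service registration.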
If you haven't, you should do so as soon as possible.

About Jay Mehta
Jay Mehta currently works as an Information Technology Director at CTIS, Inc., Rockville, MD. He has more than 25 years of progressive experience in project management, security implementation and Oracle database architecture/administration. He specializes in Oracle database security, disaster recovery and performance tuning. He has led and managed numerous infrastructure projects, including a data center move. He holds a master's degree in Computer Science from Stevens Institute of Technology. His blogs can be found .
March 9, 2016 Building Online Communities: Deeplearning4j We asked Adam & Chris, the founders of  – the first commercial-grade, open-source, distributed neural net library written for Java and Scala, with one of the most active communities on Gitter – to share their thoughts, experiences and lessons learned on open-source community building. Find out what they say, and check out the deeplearning4j channel on Gitter.

Tell us a little bit about yourself and the Deeplearning4j community. How did it all begin?
We started building Deeplearning4j in late 2013. Adam had been involved with machine learning for about four years at that time, and deep artificial neural networks were looking more and more promising. The first network in Deeplearning4j was a restricted Boltzmann machine, since that was the net Geoff Hinton had come up with back in 2006, which was the turning point in the field. I was working for another startup doing PR and recruiting, and had previously worked as a journalist, so I took care of the documentation (and still do), since we believed that proper communication was key to making open-source code valuable.

What are the main issues discussed in the deeplearning4j channel?
The main issue used to be installation. Engineers in the community taught us a lot about how to write clearer instructions, and how to make the code and experience better. If we hadn't had that feedback loop, Deeplearning4j would be worse. Open-source communities are amazing for quality control! The sooner you fix an issue, the fewer demands you get from the community about that issue. It's a great incentive to move quickly. Now the main issues are loading data and neural net tuning. We are working on communicating better about that, and on making the framework better, so that ETL and tuning get easier. Finally, there are a lot of basic questions about machine and deep learning.
Many software engineers have figured out that deep learning and machine learning are really powerful tools, so they're trying to grasp new ideas. We've written a lot of introductory material, and we point them to various web pages where those ideas are explained.

What common goals do you have as a community?
The community is centered around Deeplearning4j and our scientific computing library, ND4J, which powers the neural nets. So we answer questions about how to use the libs, and in the process we help people understand more about deep learning in general. It's not a deep learning hotline, unfortunately, so there are some questions we don't tend to answer. But we do help engineers in the DL4J community build apps and understand how neural nets work. The common goal is to learn about deep learning, and to build cool shit. We've only seen the tip of the iceberg in terms of what deep learning can do. So far, we've seen huge advances in image recognition, machine translation, machine transcription and time series predictions. By many metrics, machine perception now equals or surpasses human perception, and that will change society in ways that are hard to imagine. Those changes just haven't been implemented yet. So the secondary goal of the community is to bring this narrow form of AI into the world, so that it can make a difference.

What are the most important factors that you have taken into account while creating and maintaining the community? What factors contribute to the success of your community?
Creating and maintaining a community is a huge commitment of time and effort. You have to be available, and you have to try to understand where other people are coming from. They don't always know the jargon to ask precise questions, so you have to have the patience to figure out together with them what they're trying to ask, or where they're stuck. We're not always as patient as we should be.
Being available, making that effort, and offering support for powerful tools like this are a good way to build a community. When the makers of a big project are available to answer esoteric questions about how it works, that creates a lot of trust, because people know that you speak with authority and that if something is really broken, it's going to get fixed. There's a tight feedback loop between the community and the project creators.

What are the key challenges that you encounter while managing the community?
One of the challenges is: which questions do we care about, and which questions do people need to answer for themselves? If someone has really basic questions about Java, an IDE like IntelliJ, or a build tool like Maven, most of the time they need to figure that out for themselves. Our Gitter channel isn't the right place to hash through that, although we do help in special cases, because sometimes you need to expand your heap space for neural nets to work. You also have to find a balance between building the community and building the product. Ideally, you'd have a big team with full-time support engineers and the rest of the team working on the code base. But most open-source projects have very small teams. There are just a handful of people capable of support, and they're the ones who also should be fixing bugs and adding features.

How do you encourage participants' commitment and contribution to the community?
You create a smart, friendly environment in the community. You remind them you appreciate contributions, and you show them, as best you can, what needs to be worked on. We created top-level files recognizing our contributors and laying down the rules of the community. We also wrote a , and we now label all issues as bug, enhancement or documentation, so that people can scan the  quickly and explore where they can add something.

Tell us a little bit about the time commitment required to set up and establish the community.
How much community maintenance is required on an ongoing basis?
 is a distributed team, with engineers in Australia, Europe and the US, and Deeplearning4j community members in almost every time zone. There's a Skymind engineer watching the Gitter queue probably 12–16 hours out of any weekday. This is a pretty serious commitment, because there are fewer than 10 of us. It's not their full-time job, but maybe they'll be running unit tests and answering questions on Gitter in their downtime.

Based on your experience, do you feel that open-source communities have changed and evolved over the past years? If so, how?
Open source is winning the enterprise stack, so it's a lot more important than it used to be. The biggest organizations in the world are running on open-source software. Linux won the operating system; Hadoop won big data storage. And open source won because when you do it right, you get better code. More eyeballs mean more uptime. So the size of the OSS community, and the quality of attention that software engineers bring to open-source projects, have both increased over the years.

What advice would you give to someone who wants to start an online open-source community from scratch?
First, build something neat, something you care about. Focus on building one thing that works. Then share it with people. They will help you improve it, and they may help you think about what to build next. Don't do too much big upfront development. Try to scope it so that you can ship in a reasonable amount of time – a few weeks, say. Open source is valuable because it's a conversation, and the conversation leads you places, so that you and the project evolve in ways you can't anticipate. Also, by open-sourcing early, you're increasing your exposure and therefore your chances of getting help. We've had amazing developers join the community and the Skymind team.

What digital tools do you use to help manage and grow your community?
The code lives on , the conversation lives on .
There are about 1,360 devs on the Gitter channel now, so it's probably one of the livelier neural net conversations on the planet. Our website is hosted on GitHub, so the content lives there, too. We generate a lot of automatic documentation with Javadoc (always a WIP…). We ask people to use Maven as their automated build tool. One of the biggest problems with any software is the install, and Maven helps make that a little easier. You need to constantly try to clear away obstacles, so that people can just use your code and not worry about other stuff.

Can you share a success story of a community member that happened thanks to their participation in your channel?
For most of the stories, you just had to be there. But in general, a lot of data scientists and  come, and they just build something for their companies that works. They'll come back later and say: "We saw a 200% increase in ad coverage when we made DL4J part of the recommender system." Another guy built an app with DL4J, and then an investor saw it and he raised funds. So that's all pretty cool. With open source, you're throwing a rock out into the ocean, and you don't always hear it hit the water. You can't even see the ripples. So it's encouraging when people come back and say "thanks" and tell us how it helped them. That makes it more meaningful.

About
Gitter is extremely popular among the developer community, with over 300,000 regularly active users. Popular software communities using Gitter include .Net, Node.js and Meteor.
March 9, 2016 Today's modern CRM systems are vital to your business' success. CRM data now holds every aspect of your business' proprietary information, from corporate intelligence to sales data, as well as your customers', from buying patterns to PII. A data breach of your CRM could be devastating to your organization, resulting in lawsuits or irreparable harm to your brand's reputation and customer trust. With so much at stake, here is what you need to know to protect your CRM.

The Value of CRM Data
Today's modern CRM systems contain data that is invaluable. These systems hold significant information about corporate intelligence, financial information, sales data, patient health information, credit card information, banking wiring instructions, and every possible detail about a company's customers. In fact, a single CRM customer instance can store vast amounts of regulated, confidential and proprietary information. If not properly protected, internal and external bad actors can exploit this data in a number of ways, including:
- ID theft/medical ID theft
- Fraud
- Nation-state espionage
- Corporate/competitive espionage
- False billings
- Selling data to a third party

We have all heard about the escalating data breaches over the last few years, and we all know that the cost and related consequences of such breaches are quite severe. As per the Ponemon Institute's recent global study (sponsored by IBM), the average consolidated total cost of a data breach has increased by 23 percent since 2013. "Based on our field research, we identified three major reasons why the cost keeps climbing. First, cyber-attacks are increasing both in frequency and in the cost required to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management." (Dr.
Larry Ponemon, chairman and founder, Ponemon Institute) When a data breach affects a company, the first thing it tends to check is whether the hackers were able to get customers' financial/payment details. Companies almost seem to rejoice when they find that these details are safe, and then almost proudly announce to the press that though intruders did manage to sneak into their systems, "no details were stolen" – almost undermining the value of the other data the hackers may have obtained, including important CRM data. While many data breaches come from external bad actors, it's not just hackers, malware writers, nation-state attackers or organized crime rings who are looking to steal proprietary CRM data. Hundreds or even thousands of insiders (employees, contractors or other business partners) can have authorized access to a company's CRM. According to a recent , internal actors were responsible for 43% of data loss, half of which was intentional and half accidental. Customer and employee information were the top two content categories, according to the report.

Data Under Attack
With access to customer CRM data, cyber criminals can contact customers and build trust with them (by sharing back the customer data the hackers have obtained). Once the customer is convinced that he/she is interacting with a (perceived) genuine entity, hackers are only too eager to obtain additional data from these customers. This information can then be sold by hackers to interested parties, who can use it for identity theft. When the crime comes to light and customers are finally able to trace it to the hacking incident, companies tend to lose the one thing customers actually go to companies for in the first place – trust. Apart from identity theft, malware can penetrate an organization through phishing schemes sent with infected attachments or links which, upon opening, can lead to problems.
Through general or targeted "spear" phishing, criminals get access to email addresses, company hierarchy information, etc. These criminals then masquerade as upper-management executives and send an email to junior employees asking for a wire fund transfer. (The email may at times ask for a wire transfer to be made to a vendor, with bank details provided not for the vendor, but for criminal entities.) Or they can obtain an authorized user's credentials to access the CRM and steal the data. According to the 2015 Identity Fraud Study conducted by Javelin Strategy & Research, 12.7 million U.S. consumers were victimized by identity theft, with fraud losses amounting to $16 billion in 2014. As per the Bureau of Justice Statistics (BJS), identity theft costs Americans far more than all other property crimes. In case hackers already have access to the user's credit card information, they may use the customer payment history obtained through the CRM data hack to conduct fraudulent transactions. The transactions are done in such a way (withdrawal of small amounts) that the customer is unable to tell that something is wrong until a number of transactions have already taken place. As per the 2014 Javelin Strategy & Research report, the cost of credit and debit card fraud rose to $11bn in 2013. As per a BI Intelligence report, the U.S. accounted for 51% of all global payment card fraud in 2013. The company's CRM data can also contain strategic information, including sales forecasts, prospective customer details, etc. Bad actors, either internal or external, can download customer lists as they are leaving the company, or sell the information to ill-intentioned competitors who are more than happy to get sensitive competitor information. Corporate espionage is a growing business today, and hackers can command hefty premiums for such information.
Data theft by internal users continues to increase in damage, and studies suggest that, more than ever, employees who work on intellectual property projects believe they are entitled to take it. Additionally, departing employees, disgruntled employees, or an employee whose credentials have been compromised by a third party can access and download CRM data on their way out, often without detection. In 60% of 150 data theft cases studied in the Recover Report, internal perpetrators stole proprietary information in order to secure a new position with a competitor. In 30% of those cases the internal motivation was to use the stolen information to create new business. Annual losses to corporate espionage are estimated at $300 billion in the US. As per the Brookings Institution, 65+ percent of a company's value, sources of revenue, sustainability and growth lie in information assets, intellectual property (IP) and proprietary competitive advantages. Further, there has never been more regulatory enforcement of privacy and security standards by industry and across the globe.

Countermeasures

Some basic steps that can help protect customer data are the following:
- In possession of sensitive customer information and records, companies can install alarm systems which detect data breaches and take immediate countermeasures, including shutting down the breach immediately.
- Companies can use efficient encryption systems, as well as identity and access management systems which grant access rights strictly on a need-to-know basis. Employees who no longer need access rights can be removed from the system on a regular basis.
- Additional user authentication layers can be used to protect the data.
- Cloud-based CRM systems with IP address range restrictions can be used.
- Enable the audit log function of your CRM.
The lack of automated audit logs makes monitoring impossible and a forensic investigation time-consuming and expensive. It also leaves a void in all security certifications and regulatory requirements that relate to audit controls.
- Continuous monitoring with alerts and filtering: user activity monitoring and alerts provide some peace of mind, as well as visibility into suspicious user behaviors.
- The importance of data protection can be highlighted regularly in internal company forums and made an important part of the company's internal briefing.

As per surveys, people/employees who use CRM applications and internal systems account for more than 75 percent of the breaches which occur.

About Avani Desai
Avani Desai is a Principal and Executive Vice President at , with over 13 years of technology and privacy experience.

About Kurt Long
Kurt Long is the Founder and CEO of ®, a leading global provider of solutions which expand trust in mission-critical applications such as Salesforce, Electronic Health Records and cloud-based applications.
March 9, 2016

There are a lot of myths about cloud security that need to be clarified. One is that many people think that as soon as they give something to the cloud, they no longer have to worry about security compliance. That is absolutely not correct. If you are a business, your clients are looking to you for security. Whether you go to the cloud or do it internally using your private infrastructure, that doesn't change your responsibility in terms of who owns security compliance. There needs to be a very clear demarcation line. The second myth is black and white: either cloud is insecure by default or cloud is secure by default. Neither is correct. It really depends on the controls. You're not reinventing or eliminating any controls; you're just moving where the controls reside and changing who owns them. Cloud by default is neither insecure nor secure; at the end of the day it's how everything is implemented and how the data flows. The third myth states that data is encrypted all the time. It really depends, and that's a big myth. Some cloud service providers encrypt your data; some do not. You need to find out and understand how your data is handled. Does your service provider hold the key or not? It all depends on the model of the cloud. Whether you are at box.com or Dropbox or Salesforce, it all depends on the various processes they run on your data and whether your data is really encrypted or not. The next myth: "It's my data, I'll get it back when I need it." Not necessarily; it depends on where the data has been residing. And there are country-specific laws that you need to know, and you need to understand how to get your data back. There are a lot of other myths about the cloud, and I will touch upon some of them below. There are plenty of cloud models and services: IaaS, PaaS, SaaS. There are even different layers of services that cloud service providers offer.
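One way to settle the "who holds the key" question is client-side encryption: data is scrambled before it ever reaches the provider, so the provider stores only ciphertext and never the key. The toy sketch below illustrates the shape of that flow; the keystream construction here is NOT a vetted cipher and is for illustration only — a real deployment should use an audited library with an authenticated mode such as AES-GCM:

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream via SHA-256 in counter mode -- illustration only,
    # not a production cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)          # fresh nonce per message
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key = secrets.token_bytes(32)                # stays on the client
blob = encrypt(key, b"customer record")      # only this reaches the provider
assert decrypt(key, blob) == b"customer record"
```

With this model, even a provider-side breach exposes only ciphertext — which is exactly the distinction the "data is always encrypted" myth glosses over.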
And then you have the models: private cloud, public cloud, hybrid cloud. One needs to decide whether the data stays completely in a public cloud or in the datacenter, or whether it's a hybrid model. One needs to understand and manage the risks around going into the cloud in terms of planning and management. Considerations such as compliance, identity and access management, service integrity, endpoint integrity, information protection and IP-specific protection all need to be taken into account no matter how you are using the cloud and for what reasons. There are various use cases for the cloud: website hosting, disaster recovery, test and development, seasonal capacity, eCommerce, etc. So again, at the end of the day, you need to do a proper assessment: the vendor assessment, what model to select, how vendors deploy various architectures, what the security ramifications of going to the cloud are and, last but not least, the financial analysis. One needs to go through the full cycle. And do not follow blindly: if your competitor has gone to the cloud, should you? Maybe, maybe not; your needs and your competitor's may differ. Cloud is just an enabler. It really depends on what you are trying to provide to your clients and your organization. Are you using the cloud for apps, for transportation, for advertising? All of these scenarios bear on whether the cloud makes sense for you. The final takeaways: cloud can be a great enabler, but at the end of the day, one needs to understand that a security breach in any environment, especially a cloud environment, can have a huge negative impact on your reputation and finances. Cloud is not one-size-fits-all. One really needs to understand the details when picking a specific cloud model.
You do not start with: "I want to do cloud." You have to start with: "What do I want to achieve, what's my end goal, is cloud the right model for me?"

About David Balaban
David Balaban is a computer security researcher with over 10 years of experience in malware analysis and antivirus software evaluation. David runs the Privacy-PC project, which presents expert opinions on contemporary information security matters, including social engineering, penetration testing, threat intelligence, online privacy and white hat hacking. As part of his work at Privacy-PC, Mr. Balaban has interviewed such security celebrities as Dave Kennedy, Jay Jacobs and Robert David Steele to get firsthand perspectives on hot InfoSec issues. David has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.
March 9, 2016

When most people are asked to think of a wall of fire, they might picture the pyrotechnic scene on any first-rate metal band's stage, but unfortunately, firewall protection services aren't quite like that. That's not to imply, though, that they aren't terribly important, or that meltdowns caused by failing to ensure proper firewall protections can't be as damaging as a direct blast from a flame thrower. Firewalls adopt their name and function directly from the physical structures that stand between danger, often fire, and fragile stuff, like your face. Computer firewalls do bear some similarity to their real-life counterparts. They work to protect users and their information, not from thermal energy, but from the scum and villainy which occupy the wretched hives of the open Internet. Generally speaking, there are two layers of firewall protection, which operate in conjunction with both hardware and software to carry out a given set of rules. Hardware firewall solutions are automatically built into the routers most of us are using today. These work by tagging all outgoing traffic from a private network, like the one in our home, with a particular network ID that is then also attached to any corresponding incoming traffic. This allows the router to determine the origin of incoming packets, blocking any transfers which weren't initiated from behind the firewall. It also prevents files from being downloaded without a user's knowledge. It can stop first-step intrusions known as port scan attacks, letting us feel safe that the update for our favorite new game is what it says it is, and not some piece of spyware that will constantly spam us with adverts for pills that improve performance. Software-based protections function by monitoring the integrity of flowing traffic via variables such as incoming and destination IP addresses, transfer times or download sizes, and killing connections that don't meet expectations.
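The rule-matching logic such software firewalls apply can be illustrated with a toy first-match filter. The rule format, addresses and ports below are invented for illustration; real firewalls evaluate far richer state (protocol, connection tracking, timing) than this sketch:

```python
# Hypothetical ordered rule table: (source-IP prefix, dest port, verdict).
# An empty prefix matches any source; port None matches any port.
RULES = [
    ("10.0.", 22,   "allow"),   # SSH only from the private network
    ("",      80,   "allow"),   # HTTP from anywhere
    ("",      None, "deny"),    # default deny for everything else
]

def verdict(src_ip: str, dst_port: int) -> str:
    """Return the action of the first rule matching this packet."""
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return action
    return "deny"  # fail closed if no rule matches

print(verdict("10.0.0.5", 22))   # internal SSH
print(verdict("203.0.113.7", 22))  # external SSH attempt
```

First-match semantics with a trailing default-deny rule is the classic shape of packet-filter rule sets: anything not explicitly permitted is dropped.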
These are advantageous because they monitor outgoing traffic as well as incoming, blocking programs, such as IP-spoofing ones, from attacking individual machines once they've infiltrated a network. These are what allow us to transfer files between friends without worrying that a third party has attached something along the way, or to feel safe when continually transferring packets during something like an online gaming session, knowing that the firewall will detect any unapproved packets. Often the large-scale data thefts we hear about come as a result of some entity which has painted a giant target on its back by not implementing strong enough active protections, allowing unwanted information transfers to remain disguised as authentic ones. While in theory firewall protections work as seamlessly as their physical predecessors, in practice these types of solutions can require a little more attention than your average wall made of bricks. Pretty well anyone who has ever installed a new game or component has run into that annoying Windows prompt, the one asking if you're really sure you wish to connect and update the software. No matter how arbitrary a warning might seem or how annoying its accompanying beep is, these grievances are nothing compared to the sound your customers and partners will make when all of their credit cards and other information have been stolen.

About David Balaban
David Balaban is a computer security researcher with over 10 years of experience in malware analysis and antivirus software evaluation. David runs the Privacy-PC project, which presents expert opinions on contemporary information security matters, including social engineering, penetration testing, threat intelligence, online privacy and white hat hacking. As part of his work at Privacy-PC, Mr.
Balaban has interviewed such security celebrities as Dave Kennedy, Jay Jacobs and Robert David Steele to get firsthand perspectives on hot InfoSec issues. David has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.
March 8, 2016

Kaspersky Lab experts have detected Triada, a new Trojan targeting Android devices that can be compared to Windows-based malware in terms of its complexity. It is stealthy, modular, persistent and written by very professional cybercriminals. Devices running version 4.4.4 and earlier of the Android OS are at the greatest risk. According to recent Kaspersky Lab research, nearly half of the top 20 Trojans in 2015 were malicious programs with the ability to gain super-user access rights. Super-user privileges give cybercriminals the rights to install applications on the phone without the user's knowledge. This type of malware propagates through applications that users download/install from untrusted sources. These apps can sometimes be found in the official Google Play app store, masquerading as a game or entertainment application. They can also be installed during an update of existing popular applications and are occasionally pre-installed on the mobile device. There are eleven known mobile Trojan families that use root privileges. Three of them – Ztorg, Gorpo and Leech – act in cooperation with each other. Devices infected with these Trojans usually organise themselves into a network, creating a sort of advertising botnet that threat actors can use to install different kinds of adware. But that's not all… Shortly after rooting the device, the above-mentioned Trojans download and install a backdoor. This then downloads and activates two modules that have the ability to download, install and launch applications. The application loader and its installation modules refer to different types of Trojans, but all of them have been added to our antivirus databases under a common name – Triada.
Getting into the parental Android process

A distinguishing feature of this malware is its use of Zygote – the parent of the application process on an Android device – which contains the system libraries and frameworks used by every application installed on the device. In other words, it's a daemon whose purpose is to launch Android applications. This is a standard app process that works for every newly installed application. It means that as soon as the Trojan gets into the system, it becomes part of the app process, will be present in any application launched on the device, and can even change the logic of the application's operations. This is the first time technology like this has been seen in the wild. Prior to this, a Trojan using Zygote was only known as a proof-of-concept. The stealth capabilities of this malware are very advanced. After getting into the user's device, Triada implants itself into nearly every working process and continues to exist only in short-term memory. This makes it almost impossible to detect and delete using antimalware solutions. Triada operates silently, meaning that all malicious activities are hidden, both from the user and from other applications. The complexity of the Triada Trojan's functionality shows that very professional cybercriminals, with a deep understanding of the targeted mobile platform, are behind the creation of this malware.

Triada's business model

The Triada Trojan can modify outgoing SMS messages sent by other applications. This is now a major part of the malware's functionality. When a user makes in-app purchases via SMS in Android games, fraudsters are likely to modify the outgoing SMS so that they receive the money instead of the game developers. "The Triada of Ztorg, Gorpo and Leech marks a new stage in the evolution of Android-based threats. They are the first widespread malware with the potential to escalate their privileges on most devices.
The majority of users attacked by the Trojans were located in Russia, India and Ukraine, as well as APAC countries. It is hard to overestimate the threat of a malicious application gaining root access to a device. Their main threat, as the example of Triada shows, is in the fact that they provide access to the device for much more advanced and dangerous malicious applications. They also have a well-thought-out architecture, developed by cybercriminals who have a deep knowledge of the target mobile platform," said Nikita Buchka, Junior Malware Analyst, Kaspersky Lab. As it is nearly impossible to uninstall this malware from a device, users have two options to get rid of it. The first is to "root" their device and delete the malicious applications manually. The second option is to jailbreak the Android system on the device. Kaspersky Lab products detect the Triada Trojan's components as: Trojan-Downloader.AndroidOS.Triada.a; Trojan-SMS.AndroidOS.Triada.a; Trojan-Banker.AndroidOS.Triada.a; Backdoor.AndroidOS.Triada.

About Kaspersky Lab
Kaspersky Lab is one of the world's fastest-growing cybersecurity companies and the largest that is privately owned. The company is ranked among the world's top four vendors of security solutions for endpoint users (IDC, 2014). Since 1997 Kaspersky Lab has been an innovator in cybersecurity and provides effective digital security solutions and threat intelligence for large enterprises, SMBs and consumers. Kaspersky Lab is an international company, operating in almost 200 countries and territories across the globe, providing protection for over 400 million users worldwide.
March 8, 2016

Tenth annual survey also explores the evolution of internal auditing over the past decade

According to Arriving at Internal Audit's Tipping Point Amid Business Transformation, released by global consulting firm Protiviti, organisations are more likely than ever to evaluate cybersecurity risk as part of their annual audit plans. Nearly three out of four organisations (73 percent) now include cybersecurity risk in their internal audits, a 20 percent increase year-over-year. While there is a clear need among most internal audit groups to strengthen their ability to address cybersecurity risk, the survey found that these capabilities are much stronger in top-performing organisations, particularly those in which the board of directors has a high level of engagement in information security risks. "The rapidly evolving sophistication of cyber threats is one of the hottest topics of today's digital age," said Mark Peters, managing director, internal audit, Protiviti. "Our survey found that when it comes to assessing cybersecurity measures and auditing processes, the highest-performing organisations have audit committees and boards who actively engage with the internal audit function during the discovery and assessment of these risks. It's still apparent, however, that further work is essential to build out these internal audit capabilities in order to focus on the right areas. Companies must take stronger action to set these imperatives into place." More than 1,300 internal audit professionals, including more than 150 chief audit executives (CAEs), participated in Protiviti's 10th annual survey to assess the top priorities for internal audit functions in the coming year.
Cybersecurity Risk Capabilities and Best Practices

During the past decade, the importance of cybersecurity in internal audit functions has evolved from a simple IT risk to a serious strategic business risk, an issue that now must be addressed regularly by executive management and the board of directors. In fact, 57 percent of companies surveyed have received inquiries from customers, clients and/or insurance providers about the organisation's state of cybersecurity. Protiviti's survey found two critical success factors for establishing and maintaining an effective cybersecurity plan: a high level of engagement by the board of directors in information security risks; and including the evaluation of cybersecurity risk in the current audit plan. Companies with at least one of these success factors in place have a stronger risk posture to combat cyber threats. For example, 92 percent of organisations with a high level of board engagement in information security risks have a cybersecurity risk strategy in place, compared to 77 percent of other organisations. Similarly, 83 percent of companies that include cybersecurity risk in the annual audit plan have a cybersecurity risk policy, versus 53 percent that do not include cybersecurity risk in their audit plans.

Ten Years of Internal Audit

Over the past ten years, internal audit professionals have assessed their competency in more than thirty areas of audit process knowledge and general technical knowledge in Protiviti's survey. Areas that continue to surface as top priorities year after year include: ISO 27000, data analysis technologies, various areas of auditing IT, technology-enabled auditing and fraud risk management. As for 2016, technology issues dominated the priority list for internal auditors.
The top 10 priorities for internal audit are:

1. ISO 27000 (information security)
2. Mobile applications
3. NIST Cybersecurity Framework
4. GTAG 16 – Data Analysis Technologies
5. Internet of Things
6. Agile Risk and Compliance
7. ISO 14000 (environmental management)
8. Data Analysis Tools – Statistical Analysis
9. Country-Specific ERM Framework
10. Big Data/Business Intelligence

"With most of the top priorities identified relating to IT risks, it's clear that auditing IT remains important to internal audit functions and to the state of an organisation's overall risk profile," added Peters. Companies are trying to ensure business-as-usual systems are secure and effective, as well as working to drive change through the introduction of new technologies and greater digitisation and mobilisation of internal and customer-facing systems. These factors, coupled with increasing cyber threats, are driving internal audit to increase its IT audit capabilities each year and raising technology issues up the priority list for internal audit. It is essential for internal audit functions to act now in order to keep pace with this change."

About Protiviti
Protiviti is a global consulting firm that helps companies solve problems in finance, technology, operations, governance, risk and internal audit, and has served more than 60 percent of Fortune 1000® and 35 percent of Fortune Global 500® companies. Protiviti and its independently owned Member Firms serve clients through a network of more than 70 locations in over 20 countries. The firm also works with smaller, growing companies, including those looking to go public, as well as with government agencies.
March 7, 2016

It was way back in 2011 when I spoke of the key security challenges on the CISO's radar in their basic forms: Malware, the Insider Threat, and Spam – complemented, of course, by the other generic security challenges which appear on a daily basis. Way back in 2011 I did acknowledge that, whilst these were nevertheless important in the overall scheme of the Security Mission, I wondered if they consumed far too much interactive intervention and security bandwidth in responding to the manifestation of active compromise and security breaches – with much focus on the reactive, rather than the proactive. At that time I also questioned the value of what were [are] at times the association of those innate Security Dashboards and Balanced Scorecards which represent the anticipated snapshot of real-time and real-life exposure mitigation and 'management' to be presented to the executive [tick-box security], and I wondered if something was being missed at the lower level of the security challenge. However, now four and a bit years on, with the benefit of hindsight, I am realising that the manifestations of the unknown unknowns of insecurity seem to have been allowed to evolve, and to gain ground in the adverse landscape of Cyber Crime and all things offensive. In my experience since the 2011 observations, I can again fully attest with proof that whilst the aforementioned areas of security management are a common find, they have sadly been joined by manifestations of newly grown insecurities, and the landscape of adversity is still outstripping the balanced approach of acceptance of compliance/governance being driven out of tower-like security missions which still seem to be missing the point – which have not evolved the required level of Poacher/Gamekeeper imaginative mind-set – allowing real-time threats to expose the business, clients, and assets alike.
In the wake of the known threats which have been encountered to date, some of the unknown unknowns have now been promoted to known unknown status. These are complemented by the advent of extreme levels of successful attacks in the form of high-consumption attacks, multiple successful Ransomware incursions, Cyber Attacks, and Hacking against high-gain, prominent targets who spend what may be considered a fortune on their failing defences – and yet they are still exposed! The problem may well be created out of the low level of imaginative direction which comes from those who are the incumbents of the organisation's security strategy – playing by the rules of engagement behind the shield of Governance/Compliance, and the good old ISO/IEC 27001 as the bible to fight off all Cyber Ills – a little like David being given a pencil and clipboard to go fight Goliath! It is time to start to apply enhanced levels of imaginative hostile and offensive thinking, where imagination represents the most valuable armament in the armoury of the security professional, and hopefully the CISO. Levels of imagination which will manifest in offensive thinking that seeks to understand the unknown unknown areas of subliminal and invisible threats – such as the exposure presented by much-tolerated OSINT capabilities, metadata leakage, and other such hidden forms which so often allow the would-be attacker to gain a valuable insight into the belly of the organisation. For example, take the high-profile bank which is so exfiltration-enabled that it knowingly publishes and makes available high-value objects of intelligence on a daily basis, making the job of any hacker or other such cyber-miscreant a much easier task to effect. Sadly, however, this high-profile organisation is not alone in this space, with many others following on its cyber-tail, with their logical ass hanging out of the open window.
And on the subject of poor security, let us not forget that even in this day of BWYW [Bring Whatever You Want] to work, there are still many organisations who simply do not understand, and still support, the introduction of the known threat of that little thumb drive. And when you look at some organisations in the Oil and Gas Industry who have been aware that such introduced devices carry Hacking Tools, and the occasional form of low-grade [acceptable] Malware which is actually ignored, one may well start to feel the onslaught of professional frustration creep in! Not a case of 'Who Dares Wins', but more a circumstance of 'Who Cares who loses'. The fundamental bottom line is that the bad guys are still winning with the tool of evolved imagination – and they are entering the battleground against security management types who are, on occasion, completely devoid of what amounts to the ability to demonstrate Cyber Defensive thinking – allowing risks to populate, manifest, and take their bite out of the soft posteriors of the company they are incumbent to protect. And before you start to shout at me with a 'how dare he even suggest such a thing' – may I pre-empt the fury and state, 'he dares, because he has seen it on all too regular occasions'. 2016 is the year in which we should recognise that Cyber is starting to look like a dirty word. It is a word which is associated with the world of insecurity, rather than that of security, and it is a word which has entered the vocabulary of the public with an adversarial slant. It is in the year of 2016 that we must recognise that those in the Profession of Digital Security are potentially the group holders of the keys to global stability – and 'if' we are going to do it, we 'must' assure we do not cut corners and do it 'right'. If not, there is simply no point in even trying!
About John Walker
Visiting Professor at the School of Science and Technology at Nottingham Trent University (NTU), Visiting Professor/Lecturer at the University of Slavonia [to 2015], Independent Consultant, Practicing Expert Witness, ENISA CEI Listed Expert, Editorial Member of the Cyber Security Research Institute (CSRI), Fellow of the British Computer Society (BCS), Fellow of the Royal Society of the Arts (RSA), Board Advisor to the Digital Trust, Writer for SC Magazine UK, Originator of DarkWeb Threat Intelligence, CSIRT, Attack Remediation and Cyber Training Service/Platform, Accreditation Assessor and Academic Practitioner and Accredited Advisor to the Chartered Society of Forensic Sciences in the area of Digital/Cyber Forensics.
March 7, 2016

There is no such thing as static security – all security products become vulnerable over time as the threat landscape evolves. Any 'deploy once, update infrequently or never' security solution is inherently flawed. That is why every switched-on organisation routinely updates its anti-virus and anti-malware solutions, hardens its infrastructure and updates its policies. So why is SIP security still based upon a one-off implementation of a Session Border Controller (SBC)? From denial of service attacks to toll fraud, SIP trunking is inherently vulnerable. And in an era of near-continuous security breaches, that vulnerability continues to change and escalate. No technology or communications environment is static – and SIP security should be treated with the same urgency as anti-virus and infrastructure hardening. Paul German, CEO, VoipSec, insists it is time to think differently about SIP security – before it is too late.

The breaches go on

Another day, another security breach. The theft of 15 million T-Mobile customers' data from credit checking firm Experian, the exposure of the personal data of US-based Uber drivers, the hack of Samsung Pay, the denial of service (DoS) attack on HSBC – all of these events have occurred within very recent history. The scale of hacking and data theft is unprecedented and new vulnerabilities are continually being found and compromised. Today's threat levels are high and, given the constant publicity and public scrutiny, only the most foolhardy organisations would ignore the need to safeguard infrastructure. Yet in what is a continually changing and evolving threat landscape, inconsistencies in security policies and practices are creating new vulnerabilities. Why, for example, are organisations totally committed to continuously updating anti-virus (AV) and anti-malware solutions, yet will happily install a Session Border Controller (SBC) to protect VoIP calls and never consider it again?
If there is one thing that every security expert will confirm, it is the continuously changing nature of the threat landscape – a security product's ability to safeguard a company declines from day one. In an era of near-ubiquitous VoIP calls, when companies are routinely falling prey to toll fraud and denial of service attacks, it is time to ask why network providers and security vendors continue to downplay the vulnerability of SIP.

Static Fallacy

The deploy once, update many times model adopted by AV, web security and email security over the past two decades is well established, and organisations recognise the clear vulnerabilities associated with failing to update routinely. Companies understand the importance of buying not just a security product but a vendor's continuous research into emerging threats and a commitment not only to routine updates but also to emergency patches in response to new hacking vulnerabilities. In effect, when it comes to a continuously changing security situation, organisations recognise the need to buy products and solutions that utilise research, the existing user base and the community to stay ahead of the hacker. So why are other aspects of the communications network and infrastructure, including routers and switches, still subject to the static – implement once, update never – approach? Does this mean these areas are impregnable once protected? While some vendors may like to imply this is the case, it is not. Toll fraud and denial of service cost businesses £25.5 billion every year globally – £1.2 billion in the UK alone¹ – and, again, the threats continually evolve. For example, hackers are routinely undertaking port scanning in the hope of finding a way in – any organisation that has left SIP ports open is likely to be found out, and compromised, very quickly.
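Port scanning of the kind described leaves a recognisable trace: one source probing many ports in quick succession. A hedged sketch of detecting that pattern from a connection log follows; the log format, addresses and threshold are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical connection-attempt log: (source_ip, dest_port) pairs.
ATTEMPTS = [("203.0.113.9", p) for p in range(5060, 5075)] + [
    ("198.51.100.4", 5060),   # a legitimate SIP peer touching one or two ports
    ("198.51.100.4", 5061),
]

SCAN_THRESHOLD = 10  # distinct ports probed before a source is flagged

def detect_scanners(attempts, threshold=SCAN_THRESHOLD):
    """Flag sources that probed an unusually wide range of ports."""
    ports_by_src = defaultdict(set)
    for src, port in attempts:
        ports_by_src[src].add(port)
    return sorted(src for src, ports in ports_by_src.items()
                  if len(ports) >= threshold)

print(detect_scanners(ATTEMPTS))
```

Production intrusion detection adds a time window and rate limits, but the core signal — breadth of ports per source — is the same one an SBC or firewall would alert on.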
The scale of attack may surprise UK businesses: security consultancy Nettitude’s recent report revealed that attacks on VoIP servers represented 67% of all attacks it recorded against UK-based services – in contrast, SQL was the second most attacked service, accounting for just 4% of the overall traffic. With 84% of UK businesses considered to be unsafe from hacking according to NEC, the implications are significant and extend far beyond the obvious financial costs of huge phone bills or the increasingly common Telephony Denial of Service threats, also known as ransom events used to extort money. From eavesdropping on sensitive communications with malicious intent such as harassment or extortion, to misrepresenting identity, authority, rights and content – such as modifying billing records – or gaining access to private company and customer contacts, hackers are increasingly looking for more than basic call jacking.

Ahead of the Game

The cyber security market is set to be worth $170.21 billion by 2020² – with a strong bias towards securing email, desktops and web services. Yet while the adoption of VoIP is now at record levels, SIP security investment remains low. When hackers are looking for the easiest way in, this lack of protection is an open invitation. The reality is that SBCs provide an entry level of security – but, like any other security product, they need to evolve. And that means SBC providers need to make a continuous investment in security research and provide routine updates in order to deliver a reactive, real-time and intelligent level of security to protect against these new-world threats. Organisations – and providers – need a change of attitude to SIP security. In a constantly evolving threat landscape no one knows what is coming, and the onus is on both vendors and businesses to ensure they are in the best possible position both to safeguard data and to protect against expensive toll fraud attacks.
The constant change process has become a fundamental aspect of successful security – and that needs to be applied across the board, not just to AV. Static security does not work; it is time for the SIP security industry to face up to its responsibilities and embrace a process of continual update that will truly safeguard organisations tomorrow – not just today.

About VoipSec

VoipSec was founded with the mission to simplify the complicated and costly area of VoIP (Voice over Internet Protocol) security. VoIP is a key tool for businesses in today’s environment, yet due to the cost of traditional VoIP security, many organisations are leaving their networks open to risks such as voicemail hacking, toll fraud and Telephony Denial of Service (DoS). VoipSec’s products have been designed to run in virtualised environments, eliminating the need for bulky and expensive hardware and rapidly decreasing the time it takes to deploy security solutions for an organisation’s voice calls. VoipSec’s EasySBC is the first module in the VoipSec Security Platform, which provides features such as remote working facilities and quality monitoring tools, as well as advanced security capabilities. Using VoipSec’s EasySBC, businesses can take the first step to ensuring the security of their communications infrastructure whilst being able to leverage the benefits of VoIP for voice, unified communications and customer experience. EasySBC can be downloaded and deployed on a virtual server rapidly with a relatively low set-up charge. The company is based in Milton Keynes and was founded by Paul German, an expert in bringing new technologies to market for small and medium-sized businesses.
March 7, 2016 New malware variants were released every day in 2014 in record numbers, with no signs of slowing down, according to Symantec’s Internet Security Threat Report. Malware, worms and other viruses can spread through a company’s network like wildfire. Getting your system and network back up and running only scratches the surface of expenses. Malware can cause data breaches, compromise customers’ security and hold you liable for damages. According to the 2015 Cost of Data Breach Study’s global analysis, the average total cost of a data breach for participating companies in the study increased 23 percent to $3.79 million. The idea of data isolation isn’t new, but it has expanded beyond simple and separate servers and networks into a more sophisticated medium. Take a look at what data isolation is all about and why it matters.

Isolate your security zones

Ask yourself how many of your workstations and servers need to be connected. Isolating your data as much as possible can keep malware from spreading and contain it to one unit. Think about how SaaS platforms like Salesforce work. Their customers cannot see any data other than their own. Creating sub-accounts can also help isolate data. For example, when a customer uses your billing portal, they are essentially on the same system as everyone else, but their data exists separately from the rest of the network.

Research your cloud provider

Whether you’re using a SaaS platform for complex marketing or a cloud provider to store files and data, you need to ask questions. Ask about their safety protocols, how data breaches are handled and what percentage of their team is dedicated to security. Find out how your data is isolated and separated and who else has access to your information. Automatic computer backup and DIY cloud storage have become increasingly popular over the years. But do you know what’s going on with your data? Find out how your files are encrypted and stored, and don’t be afraid to ask for credentials.
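The sub-account isolation described above can be sketched in a few lines. This is an illustrative model only (the records and field names are invented, not any vendor's API): every read is filtered by the caller's tenant identifier, so one customer's rows are invisible to another even though all tenants share the same platform.

```python
# A shared table holding every tenant's data side by side (illustrative).
records = [
    {"tenant": "acme",   "invoice": 1001},
    {"tenant": "acme",   "invoice": 1002},
    {"tenant": "globex", "invoice": 2001},
]

def invoices_for(tenant_id: str) -> list:
    """Return only the rows belonging to this tenant -- never the full table.
    Scoping every query this way is the core of multi-tenant data isolation."""
    return [r for r in records if r["tenant"] == tenant_id]

print(invoices_for("acme"))  # acme sees only its own two invoices
```

In a real system the same filter would be enforced server-side (for example as a mandatory WHERE clause or row-level security policy), so no client request can ever omit it.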
For example, some providers have completed an SSAE 16 Type 2 audit and hold ISO 27001 certification.

Ask about air gapping

Air gapping is a simple technique just about anyone can use to add an extra layer of manual security. Government and military installations, as well as big businesses, use the method to further lock down their security. The concept is simple: either turn off an unused server altogether, or leave it on but disconnected from the Internet. That server can be part of your overall network, but it will take manual manipulation to get any malware onto it.

Restrict access

Manually restrict which devices and computers can connect to your network and access information. BYOD is an acronym for “Bring Your Own Device,” but some refer to it as “Breach Your Own Data”. Allowing an influx of personal devices onto your network requires additional security protocols and greater access restriction. Another issue is taking company-issued devices home and using them to surf the Web or make online purchases. That activity can further expose your network to risks. If you’re going to employ a BYOD policy, define it clearly and set up permissions for what personal devices can access. Consider requiring employees to leave devices in the office, or restrict what activities can be done on those devices when using them from home.
March 4, 2016 The White House is looking to hire its first-ever chief information security officer (CISO). There’s little doubt that appointing a Federal CISO is a long overdue response to a recurring problem: the inability to properly secure government systems and sensitive data. The list of government agencies experiencing security failures is lengthy, from the Office of Personnel Management attacks in 2013 and 2014, to the State Department email system in 2014, to the latest attack on Department of Justice and Homeland Security computer systems. According to the job posting for the newly created position, the Federal CISO will be in charge of federal “cybersecurity policy and strategy,” and have “oversight of relevant agency cybersecurity practices and implementation across federal information technology systems.” It’s encouraging to see so much emphasis placed on the critical role of security policy. Effective, agency-wide IT security policies serve as the backbone of any successful security program, as they provide a framework and support mechanism for managing technologies, maintaining order and achieving organizational goals. They also help minimize threats, prevent security breaches and can assist employees in effectively managing risks. But filling this role will be no easy task, especially considering the current IT security skills gap facing the industry. A CISO must take a holistic approach to managing a security team, creating an atmosphere that challenges and recognizes the security team while taking stock of the skills and the tools they have at their disposal. What would my advice be to the person who ultimately lands this job? Here are five things to consider:

Be a technologist. A CISO should be a person who can come up with real-world, reliable ways to protect networks, because he or she knows exactly how hackers break in. That requires a deep understanding of the motivations, skill level and methodology of hackers.
Recognize that the hacker’s most common internal target isn’t the CEO – most likely, it’s someone within the IT organization, or someone who is the gatekeeper of the most sensitive information, such as human resources. In many cases, cybercriminals go after the weakest link in the organization, which means the CISO must build a policy that protects the most vulnerable stakeholders on the network. Defining the personas of the network’s enemies should inform a CISO’s security policies and strategies.

Be a futurist. One thing is certain with each new governmental data breach: doing things the way they’ve always been done isn’t the answer. We’re at a point in time where we’re seeing profound changes in the way business and IT are operating. As we ride this digital disruption wave, technologies like cloud and software-defined networking are forcing organizations to look at cybersecurity, risk and compliance in a new way. The incoming Federal CISO must understand that these are not small, incremental changes. They will require a fundamental transformation in some of the core foundations of cybersecurity.

Be a realist. Clearly, outsider threats at the federal level are a huge concern, with nation-state attacks from China, Russia, North Korea and the Middle East escalating by the week. But realists know that a large amount of blame for cybersecurity failures can be placed directly on the network’s own users and managers. The Edward Snowden incident illustrates this with painful clarity. The ultimate insider threat, Snowden exploited the government’s poorly created and enforced security policies, inadequate system structures and visibility, haphazard oversight, and minimal education on best security practices. This made it easy for him to gain unfettered access. The incoming CISO must assume that a breach has already occurred – and that poor user behavior and poorly maintained systems are likely to blame.

Be vigilant.
While many CISOs spend the majority of their time worrying about preventing the next zero-day attack, Gartner’s research shows that 99 percent of cyberattacks are based on known vulnerabilities in vendor software or hardware. In other words, cybercriminals don’t need to re-invent the wheel to get results. That’s why attack vectors such as spear phishing are still being used – they work. The Federal CISO must resist overemphasizing zero-day defenses, and instead build out a comprehensive security policy focused on vulnerability management and patching, as well as an agency-wide policy on network segmentation, regulatory compliance, and cloud security.

Be humble. CISOs need to admit that they don’t have all the answers. This means evaluating and accepting the areas in which they aren’t delivering, and then making the right improvements. Be honest and ask the hard questions, such as, “Is our technology truly solving a challenge, or is it causing more problems?” They also need to accept feedback and recommendations from their teams about the best approach and tools to fill in the gaps where things aren’t working well.

There’s no question that the incoming Federal CISO will have a huge workload. The role will obviously require a lengthy resume of security and IT experience. But it also calls for someone who is a visionary, with an eye to the future of technology. Most of all, a good CISO must marry this experience and spirit of innovation with the business goals of the organization. It’s essential for CISOs to lead the charge, driving innovation as needed, while reducing complexity wherever possible.

About Ofer Or

As Vice President of Products, Ofer Or is responsible for leading product strategy. With over 20 years of experience in high-tech and network security, Ofer has an extensive background in developing innovative products which have had a profound market impact. Previously, Ofer served as Director of Research & Strategy at Tufin.
Prior to Tufin, Ofer was Senior Product Line Manager at Check Point Software Technologies (CHKP), where he led Check Point Security Management products and Check Point Security Appliances. Ofer held marketing and technical positions at Check Point (CHKP), Microsoft (MSFT) and Amdocs (DOX), and served in an elite computer unit in the Israel Defense Forces (IDF). Ofer holds a BA in Political Science and Sociology from Bar-Ilan University, an MBA from INSEAD, and an MA in Law from Bar-Ilan University.
March 4, 2016 Over the years, cloud applications have become the norm at organizations rather than the exception. The cloud is no longer the little sibling of on-premises applications. According to a report by Allied Market Research, there has been huge growth in adoption, with more than 30 percent further growth predicted in the next four years. There are many reasons for this growth, including employees more frequently working from home or on the go and needing applications that they can access from anywhere at any time. As the cloud market continues to evolve and grow, there need to be methods in place to protect these cloud applications and ensure the security of the organization’s network. While cloud applications are convenient for access from anywhere, the organization needs to ensure that only the correct people can gain access to the appropriate systems. There also need to be methods in place that stay ahead of any attempts by hackers to steal secure information, whether from outside intruders or from employees within the organization.

What are the potential security risks?

The most common issue is that when an organization begins to use numerous cloud applications, it becomes difficult to ensure that employees have the correct access to cloud applications and data. Users may have access to systems and applications that they shouldn’t, leaving the company’s data insecure. For example, the most common access mistakes occur when an employee starts at an organization and is given too many rights, or when the wrong people give them access over time. Then there is the issue of password management, especially since it is very common for users of cloud applications to be working outside of the company’s network from home or while traveling. For example, think of an employee who is on the go and in a hurry.
They need to log into an application on their smartphone while traveling and find themselves struggling with remembering and entering all of their passwords for each application. So, what does the employee do? They either keep their passwords in notes on their phone or write them down and keep them with them, neither of which is secure at all.

Cloud Identity and Access Management Growth

As with cloud applications, cloud identity and access management (IAM) solutions have grown greatly over the years. This only makes sense, since there need to be solutions in place to manage these expanding applications. Cloud IAM solutions allow the organization to ensure security and easily manage the applications. How? Just as with in-house applications, those hosted in the cloud need to be managed properly so that, as mentioned, only the correct people have access. Many solutions are available for access management for in-house applications, but as the cloud has grown, many of these have evolved to work seamlessly with cloud applications as well. This allows the organization to ensure correct access for in-house and cloud applications from one source. The first issue a cloud IAM solution assists with is setting up correct access from the beginning. Since provisioning employee accounts in all applications, including cloud applications, is time consuming, often a template account copied from someone in a similar position is used for the new employee. This leads to the employee accumulating rights which they should not have. By basing rights on the different roles within the organization, specific access profiles can be set with an IAM solution. When the employee is added to the source system, their access rights and accounts in each application are automatically generated and set up for them depending on their role. An email can then be sent to their manager with all of their access rights and accounts.
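The role-based provisioning described above can be sketched as follows. This is a hypothetical model rather than any particular IAM product's API: rights derive from an access profile attached to the employee's role, so a new starter gets exactly the applications the role allows instead of a copy of a colleague's accumulated rights.

```python
# Access profiles per role; role and application names are illustrative.
ACCESS_PROFILES = {
    "sales":   {"crm", "email", "billing-portal"},
    "finance": {"email", "billing-portal", "ledger"},
    "intern":  {"email"},
}

def provision(employee: str, role: str) -> set:
    """Return the set of applications to create accounts in for this role.
    A real IAM system would call each application's provisioning API here
    and then email the manager a summary of the granted rights."""
    if role not in ACCESS_PROFILES:
        raise ValueError(f"unknown role: {role!r}")
    return set(ACCESS_PROFILES[role])

print(sorted(provision("jdoe", "sales")))  # → ['billing-portal', 'crm', 'email']
```

Because rights are computed from the role each time, rather than copied from a template account, an audit can always answer "why does this person have this access?" by pointing at the profile.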
If for any reason this is incorrect, the manager can then easily edit the employee’s account. Another access issue with cloud applications is that employees often wrongly obtain access rights over time. Either they request access from someone who does not have the authorization to give it, or they borrow someone’s credentials. This situation can be prevented with an IAM workflow. A workflow can be set up by the organization so that only the correct authorized managers can give access to secure applications. For example, if an employee needs access to a certain secure application for a project, they can easily make the request through a portal. The request is then sent to the appropriate manager, who can either accept or deny the request. If needed, there can also be several levels of approval required. This ensures that only the correct authorized people are giving access rights. Passwords for cloud applications also need to be protected without interfering with convenience, one of the main benefits of cloud applications. One way this can be achieved is with web single sign-on solutions. These types of solutions allow users on the go to log in with one single password to access a portal of all
integrity of the device as a whole. Dealing with threats as they occur is crucial to ensuring adequate security for your mobile device.” Krey advised: “While methods such as two-factor authentication can help to an extent, if the malware has been designed to target banking applications – as it is suspected MazarBOT has – there’s no second line of defence. Instead of using crutches such as antivirus or two-factor authentication, it is vital that security is developed at the level of the application itself. “As it stands, the responsibility for applications has been diffusely passed between Android developers, app developers and, finally, the end user. Time and again, this dynamic has been proven ineffective, and a rethink of traditional means of protecting Android applications is long, long overdue,” Krey concluded.

About Promon

Traditional security systems such as antivirus, antispam and antimalware are outdated and no longer able to protect companies and users against security threats and
security software. There’s never been so much crime in the corporate world, which is why we are all obliged to secure our data and closely follow digital security trends. In the digital world, prevention is always better than cure, since it saves company funds and relieves employees of pressure.

About Nate Vickery

Nate Vickery is a business consultant and editor-in-chief at . He is mostly engaged in finding the best IT solutions for small businesses. Lately he has been occupied with researching cyber security and big data trends. You can follow Nate on Twitter at @NateMVickery.
