Welcome to the Frontpage

Since 1981, Computer Technology Review has been an authoritative source on data storage and network technologies. Today, we cover emerging technology and solutions in ediscovery (or e-discovery), compliance, virtualization, data security, backup, and disaster recovery.

Buried by the Cloud: 7 Signs Your Business Has Too Many Apps
by Michael Gold

I’m obsessed with cloud apps. But it’s not any one app that interests me. Nor is it any particular category of apps. With thousands of SMB-focused apps in the world, I’m obsessed with the one point where all these apps potentially intersect: your business.

In its 2014 SMB Cloud Landscape Report, Intermedia and Osterman Research found that the average business now runs over 14 apps (and growing). This typically includes email, phones, file sharing, CRM, email marketing software, finance tools, social media accounts, and others.

The overhead required to manage any one of these apps isn’t terribly significant. But when you add 14 apps together, your overhead increases exponentially.

Posted by Kim Borg on Fri, 27 Feb 2015

NetApp offers data backup, archive in the cloud, while providing a secure path for data from on-premises to AWS

NetApp added new software and solutions for hybrid IT deployments that improve data backup and recovery times, and give users more control over how, when and where they store their data throughout its lifecycle.

Customers can leverage the flexibility of Amazon Web Services (AWS) to address their backup, recovery and archive challenges. NetApp is also adding support for Amazon Simple Storage Service (Amazon S3) as a storage tier to StorageGRID Webscale for long-term archives. The company also released updates to its OnCommand Cloud Manager, OnCommand Insight software and Cloud ONTAP software subscription, enabling new abilities to speed business innovation and IT responsiveness.

NetApp also unveiled three new models of its SteelStore cloud-native backup solution as Amazon Machine Images (AMIs) to offer a secure approach to backing up cloud-based workloads. Customers can also choose on-premises SteelStore physical appliances for seamless, secure data protection to the cloud.

StorageGRID Webscale 10.1 comes with the tools to securely store data in the right place, at the right time. Fully supported by a dynamic policy engine, StorageGRID Webscale leverages Amazon S3 to store data on AWS. Geo-distributed erasure coding reduces on-premises costs and increases security. The StorageGRID Webscale object storage solution is available in appliance and software delivery models, and both types can be mixed in the same grid.

NetApp OnCommand Insight 7.1 software delivers Storage Resource Management (SRM) for hybrid environments. Clients can ensure service level agreements are met through performance monitoring, capacity management, identification of reclamation opportunities and greater awareness of IT costs. The new software includes enhanced features for brokering and monitoring hybrid storage deployments, as well as innovations that reduce storage OPEX and CAPEX and improve capacity planning.

NetApp also boosted usability through advanced data visualization features such as new widgets, dashboards, and additional data points.

Customers can now use Cloud Manager software and the Cloud ONTAP software subscription to manage replication of their data to the cloud from a single application. In addition, customers can now simplify deployment of the Cloud ONTAP software subscription and OnCommand Cloud Manager and automate NetApp service and support registration.

New versions of OnCommand Cloud Manager and Cloud ONTAP software subscription and NetApp’s three new models of SteelStore Amazon Machine Image (AMI) software subscription are currently available on AWS Marketplace. StorageGRID Webscale 10.1 and the StorageGRID Webscale appliance are planned to be available in April, while NetApp OnCommand Insight 7.1 software will be available next month.

NetApp’s new data lifecycle solutions allow users to control, integrate, move, secure and consistently manage their data, while benefitting from the company’s investments and expertise in building enterprise-class hybrid deployments that are designed to evolve as customer needs change. More than 275 service provider partners are contributing to the development of the data fabric, enabled by NetApp technology.

The enhanced NetApp SteelStore and StorageGRID Webscale solutions provide the flexibility of Amazon S3 and Amazon Glacier for long-term archiving. Customers gain an additional secure physical location to address single-site or local-site risks as well as tape risks. This addition to the NetApp portfolio creates ideal solutions designed to help customers integrate their storage with AWS resources.

NetApp offers Private Storage for AWS to give users the benefits of cloud elasticity and savings with the performance, availability and control of dedicated enterprise storage. NetApp Cloud ONTAP software subscription, an instance of NetApp’s storage OS running as a software subscription on AWS Marketplace, offers a cost-effective option for cloud bursting and disaster recovery.

Posted by Anna Ribeiro on Thu, 26 Feb 2015

Dell XC Series expands to provide users a range of hyper-converged solutions for virtualized workloads

Dell unveiled the second wave of its Dell XC Series of web-scale converged appliances to help streamline data centers, offering over 50 percent more storage capacity and up to twice the rack density to support customers deploying a range of workloads, including virtual desktop infrastructures (VDI), private cloud and big data.

The Dell XC line of web-scale converged appliances offers higher-performance servers and additional drive options (flash and hard disk) to support more demanding workloads in VDI, private cloud and big data initiatives. It also doubles density to 16 terabytes per rack unit, supporting the same amount of data in half the rack space, which benefits all types of customers, especially managed service providers and those in co-located data centers.

The Dell XC Series, version 2.0, will be available in North America, South America, Europe, Middle-East and Africa on March 3, and elsewhere worldwide later this March.

The Dell XC Series integrates the company’s proven x86 server platform and Nutanix web-scale software to provide enterprise-class, hyper-converged appliances for virtualized environments. Backed by Dell’s Global Service and Support organization, these 1U and 2U appliances consolidate compute and storage into a single platform, enabling application and virtualization teams to quickly and simply deploy new workloads. This solution enables data center capacity and performance to be expanded one node at a time, delivering linear and predictable scale-out expansion with pay-as-you-grow flexibility.

XC Series appliances include advanced software technologies that power web-scale and cloud infrastructures such as Amazon, Google and Facebook, but are engineered for all enterprises, regardless of size. The hyper-converged offering seamlessly integrates server and storage resources in a self-healing system, delivers all services through software on proven Dell hardware, distributes data, metadata and operations across the entire cluster, increases performance linearly as capacity is added one node at a time, and provides automation and analytics for system-wide monitoring.

XC Series appliances simplify the deployment of virtual machines in any environment. The Nutanix Distributed File System (NDFS) runs in a Controller VM (CVM) on each node, aggregating direct-attached storage resources (hard disk drives and flash storage) across all nodes. This pooled storage is made available to all hosts through a fault-tolerant architecture. With the ability to run VMs out of the box, XC Series appliances deliver an easy, modular approach to building modern data centers.

XC Series appliances are ideal for workloads running in virtual environments. Preconfigured appliance options with flexible ratios of compute and storage capacity, coupled with support for VMware ESXi and Microsoft Hyper-V, make them well suited to running different workloads in a unified Dell XC cluster. They can be integrated into any data center in less than 30 minutes, and can support multiple virtualized, business-critical workloads including VDI, private cloud, database, OLTP and data warehouse as well as virtualized big data deployments.

IT and storage administrators no longer have to manage LUNs, volumes or RAID groups. Instead, they can manage their virtual environments at a VM level using policies based on the needs of each workload.

The Nutanix Prism management framework provides a highly intuitive, easy-to-use graphical user interface (GUI). All information is organized and presented through elegant touch points to facilitate easy consumption of operational data. Prism provides the ability to define and manage a complete hyper-converged infrastructure from nearly any device and includes REST APIs for integration with third-party cloud management systems.
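For readers who want a feel for that REST integration, the short Python sketch below calls a hypothetical VM inventory endpoint with the requests library. The host, port, path and field names are illustrative assumptions, not documented Prism API details; consult Nutanix's API reference for the actual resources exposed by a given release.

# Illustrative only: endpoint path, port and field names are assumptions,
# not a documented Prism API call.
import requests
from requests.auth import HTTPBasicAuth

PRISM = "https://prism.example.local:9440"          # hypothetical cluster address
AUTH = HTTPBasicAuth("admin", "example-password")   # hypothetical credentials

def list_vms():
    """Fetch the VM inventory from a hypothetical Prism REST endpoint."""
    resp = requests.get(
        PRISM + "/api/vms",   # placeholder path, not a documented Prism resource
        auth=AUTH,
        verify=False,         # appliances often ship with self-signed certificates
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for vm in list_vms().get("entities", []):
        print(vm.get("name"), vm.get("powerState"))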

Prism Central gives administrators a bird’s eye view of resources across multiple clusters running different hypervisors and enables them to manage individual clusters using the GUI or a Windows PowerShell command-line interface. The GUI simplifies configuration and management of replication, DR and compression policies, which can be applied to individual VMs. Compute and storage scaling and maintenance are automated through a simple, one-click add-node feature, auto-discovery protocols, and a non-disruptive, one-click upgrade of the Nutanix CVM and host hypervisor.

Cluster Health provides comprehensive monitoring of VMs, nodes and disks in the cluster. It proactively flags potential issues in the hyper-converged infrastructure stack and provides the ability to visually navigate issues by grouping and filtering resources at VM, host and disk levels.

Posted by Anna Ribeiro on Wed, 25 Feb 2015

Scale Computing updates HC3 HyperCore 6.0 platform with integrated disaster recovery, streamlines UI workflows

Scale Computing added built-in virtual machine level remote replication and debuted a new, streamlined user interface to make its HC3 platform ideal for small- and medium-sized businesses and enterprise departments looking to overcome the barriers of implementing virtualization and fast recovery from IT disasters.

The company’s newest release expands on the feature set first introduced in HyperCore v5 to further overcome barriers of virtualization adoption at smaller companies or departmental organizations. Scale Computing’s HC3 virtualization platform is a complete ‘data center in a box’ with server, storage and virtualization integrated into a single appliance to deliver simplicity, availability and scalability at a fraction of the cost of similar solutions.

Scale Computing’s 6.0 release is available with HC1000, HC2000 and HC4000 purchases or as an upgrade to existing installations.

Instead of treating storage, servers, virtualization and management as separate data center silos, HC3 products combine them in a comprehensive system and automate overall management. This allows IT to focus on managing applications, not infrastructure. With no virtualization software to license and no external storage to buy, HC3 products lower out-of-pocket costs and radically simplify the infrastructure needed to keep applications running. HC3 products make the deployment and management of a highly available and scalable infrastructure as easy as managing a single server.

As data requirements at organizations of every size continue to require more storage assets and additional compute power, virtualization becomes a more-attractive option. For IT staffs at smaller organizations, the introduction of a virtualization layer can add complexity and management issues beyond what they are often prepared to handle. 

Addressing disaster recovery is often out of reach due to the cost and complexity of point solutions. Scale Computing’s HC3 products give users the server hardware, virtualization software and storage capabilities needed in a hyperconverged platform that can be managed from a single, unified interface to streamline data center management, and extend that same simplicity to remote disaster recovery.

Scale Computing’s latest release features built-in remote disaster recovery that allows users to set up continuous, VM-by-VM replication between two HC3 clusters. Space-efficient snapshot technology replicates to a secondary site, tracking only the blocks unique to each manual or automatic snapshot and sending just the changed blocks. Testing a DR infrastructure plan is now as simple as cloning a snapshot on the target cluster and starting a VM, with no disruption to ongoing replication.
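The delta-oriented replication described above can be pictured with a small, purely conceptual Python sketch: each snapshot is modeled as a map of block numbers to content hashes, and only blocks that differ from the previously replicated snapshot are shipped. This is an illustration of the general technique, not Scale Computing's implementation.

# Conceptual sketch of snapshot-delta replication, not Scale Computing's code:
# each snapshot maps block numbers to content hashes; only blocks that changed
# since the last replicated snapshot are shipped to the secondary site.
from typing import Dict

Snapshot = Dict[int, str]  # block number -> content hash

def changed_blocks(prev: Snapshot, curr: Snapshot) -> Dict[int, str]:
    """Return only the blocks that are new or modified in `curr`."""
    return {blk: h for blk, h in curr.items() if prev.get(blk) != h}

def replicate(prev: Snapshot, curr: Snapshot, send) -> int:
    """Ship just the delta; `send` stands in for the network transfer."""
    delta = changed_blocks(prev, curr)
    for blk, h in sorted(delta.items()):
        send(blk, h)
    return len(delta)

if __name__ == "__main__":
    snap1 = {0: "aa", 1: "bb", 2: "cc"}
    snap2 = {0: "aa", 1: "bf", 2: "cc", 3: "dd"}   # block 1 changed, block 3 added
    sent = replicate(snap1, snap2, lambda blk, h: print(f"send block {blk} ({h})"))
    print(f"{sent} blocks replicated")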

In case of disaster, users can simply “clone” a snapshot on the target cluster for the manual failover of a VM that is immediately bootable or simply replicate changed data at a DR site back to the primary site for simple failback. This capability is built in at no additional charge and is available as a non-disruptive “rolling upgrade” for users of HC3 HyperCore version 5.

The HC3 platform is an all-in-one appliance that can be installed in less than an hour, used to deploy new VMs in minutes, and comes with a built-in browser-based management system. It provides fully integrated, cluster-wide resource utilization and alerts, with no storage provisioning or management (LUNs, targets, data stores) required. A single-vendor support experience means there are no disparate systems to integrate.

Scale Computing’s HC3 and the HyperCore architecture were designed to provide highly available, scalable compute and storage services, while maintaining operational simplicity through software automation and architecture simplification. HyperCore puts intelligence and automation in the software layer and was designed to take advantage of low cost, replaceable and upgradable “commodity” hardware components including the virtualization capabilities built into modern CPU architectures.

By clustering these components together into a single unified and redundant system, these attributes combine to create a flexible and complete “data center in a box” that operates as a redundant and elastic “private cloud”: additional nodes are automatically incorporated into the cluster, and hardware failures are expected and handled, with replacement requiring minimal effort or disruption.

The new user interface deployed in Scale Computing’s version 6 upgrade features an intuitive design with almost no learning curve that allows administrators to employ a “set it and forget it” mentality where they only need to periodically log in to make changes to the system. The intelligence of Scale Computing’s patented HyperCore software handles the heavy lifting of VM failover and data redundancy. Pop-up notifications display in-process user actions, alerts and processes to present users with relevant information about active events on the system.

Scale Computing’s built-in browser-based management console streamlines workflows for administrators with a 60 percent reduction in clicks during the VM creation workflow and quicker access to VM consoles directly from its Heads-Up Display (HUD). Users can now combine VMs into logical groups via tagging and set multiple tags for easy filtering through spotlight search functionality that matches names and descriptions for quick access in larger environments. Snapshot, cloning and replication functionality are now integrated into the card view of each VM for easy administration.

Last October, Scale Computing released its patented Ultra-Easy HyperCore Software system, which expanded the capabilities of its HC3 family to provide greater scalability and storage computing power. HyperCore integrates storage, servers and virtualization software into an all-in-one appliance-based system that is scalable, self-healing and as easy to manage as a single server.

HyperCore continuously monitors all virtual machines, software and hardware components to detect and automatically respond to common infrastructure events, maintain application availability and simplify data center management.

HyperCore v5 comes with non-disruptive, rolling updates: workloads are automatically live-migrated across the HC3 appliance to allow for node upgrades, even if an upgrade requires a node reboot, and are returned to the node after the upgrade is complete. It also features near-instant VM-level snapshots; Scale Computing’s proprietary Allocate-on-Write technology ensures no disruption at the time of the snapshot, no duplication of data, and no performance degradation even with thousands of snapshots per VM.

HyperCore was optimized to support new HC3 nodes with twice the virtual machine density by doubling I/O performance, CPU and memory resources per node. All HC3 systems are highly available and include built-in unified management. The result is a data center solution designed to help mid-to-large size companies seamlessly scale their infrastructure simply and cost-effectively.

Posted by Anna Ribeiro on Wed, 25 Feb 2015

Imation targets media, entertainment and data protection workloads with Nexsan NST2000

Imation introduced Tuesday the Nexsan NST2000, its ultra-efficient hybrid storage appliance purpose-built for media and entertainment and data protection workloads, which adds Fibre Channel connectivity while bringing hybrid storage to mid-sized organizations.

NST2000 expands the Nexsan NST family’s unified block and file sharing capabilities by adding Fibre Channel connectivity for those small and medium-sized organizations that prefer it for their applications. This gives organizations the ability to take advantage of the benefits of NST’s hybrid storage using a full range of unified storage protocols, including iSCSI, NFS, CIFS, FTP and now Fibre Channel.

For applications with stringent workload requirements like cloud computing, server virtualization, desktop virtualization (VDI) and databases, the NST2000 delivers performance to ensure application demands never outpace available I/O again. Applications have never performed faster on a system with the economics of spinning-disk storage.

“With the introduction of the NST2000, organizations have an entirely new set of options for storing, managing and protecting the high-value data they create in situations like digital asset management in media and entertainment workflows,” said Mike Stolz, vice president of marketing and technical support for Imation’s Nexsan solutions. “They can deploy hybrid storage, realize new levels of performance, scale their data centers to new levels of capacity and much more – all at a price point that was previously inaccessible to them given their limited budgets and resources. We’re excited to help these organizations operate more efficiently and generate more ROI from their storage investment – and to add the NST2000 to help address a full range of application requirements.”

The NST2000 comes fully featured with snapshots, replication, thin provisioning and compression, while a GUI (graphical user interface) and scriptable CLI streamline setup and management for the time-constrained IT administrator. As with all Imation storage, the Nexsan NST2000’s no-single-point-of-failure architecture boosts reliability, so enterprises get strong performance and functionality without the enterprise-class price.

NST2000 storage systems utilize SSD, NL-SAS or SAS drives; two redundant, high performance, multi-core Xeon-based storage controllers; high speed I/O subsystems and a fully redundant architecture. All active components are hot-swappable, including power supplies, disks and controllers.

FASTier read and write cache complements 96GB DRAM to speed up IOPS and throughput. The NST2000 features 16 Xeon CPU cores, up to 168TB of capacity and up to 2TB of SSD in FASTier cache. FASTier flexible hybrid caching provides high performance where needed, while NestOS software optimizes the hybrid storage architecture and resources.

The NST2000 provides CIFS and NFS shared folders as well as Fibre Channel or iSCSI volumes. Snapshots do not require the pre-reservation of storage capacity, and they may be scheduled and managed from the management GUI or initiated from Windows VSS requestors.

Individual shares, LUNs, or entire storage pools may be replicated asynchronously to a second NST2000 storage system, with snapshots intact for use on the target side for backups, testing or data mining. Active Directory integration enables user identities and access rights on the NST2000 shares, while CHAP, iSNS and LUN masking protect iSCSI traffic. Quotas limit storage consumption by share, and oversubscription is permitted for thin provisioning storage, along with alarms which notify when additional storage is needed. Capacity can be expanded by adding additional storage to a running system, so future needs can be met without incurring downtime. Besides, link aggregation combines Ethernet ports for faster throughput.
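The quota and oversubscription behavior described above can be illustrated with a small Python sketch of thin-provisioned pool accounting: logical quotas may exceed physical capacity, and an alarm fires when actual consumption approaches the physical pool. This is a conceptual model, not Imation's implementation, and the thresholds are arbitrary.

# Illustrative accounting for a thin-provisioned pool, not Imation's implementation:
# shares may be oversubscribed (sum of quotas exceeds physical capacity), so an
# alarm fires when actual consumption crosses a threshold of the physical pool.
class ThinPool:
    def __init__(self, physical_tb: float, alarm_pct: float = 0.8):
        self.physical_tb = physical_tb
        self.alarm_pct = alarm_pct
        self.quotas = {}      # share name -> quota in TB (logical)
        self.used = {}        # share name -> actual consumption in TB

    def add_share(self, name: str, quota_tb: float):
        self.quotas[name] = quota_tb
        self.used[name] = 0.0

    def write(self, name: str, tb: float):
        if self.used[name] + tb > self.quotas[name]:
            raise ValueError(f"{name}: quota of {self.quotas[name]} TB exceeded")
        self.used[name] += tb
        if sum(self.used.values()) > self.alarm_pct * self.physical_tb:
            print("ALARM: pool nearing physical capacity, add storage")

pool = ThinPool(physical_tb=100)
pool.add_share("media", quota_tb=80)
pool.add_share("backups", quota_tb=60)   # 140 TB of quota on 100 TB: oversubscribed
pool.write("media", 50)
pool.write("backups", 35)                # crosses the 80 percent alarm threshold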

"The new NST2000 is just the latest proof point that Imation has raised the bar with its Nexsan product portfolio," said Deni Connor, founder of SSG-Now. "Now with its NST2000, mid-sized organizations can take full advantage of the benefits this product provides."

Posted by Anna Ribeiro on Tue, 24 Feb 2015

Druva rolls out cloud data privacy framework to meet global privacy concerns

Druva rolled out Tuesday a set of new capabilities that make up an inclusive data privacy framework to enable businesses to meet growing global privacy demands. The new framework takes into account growing concern among governments around the world to ensure that the personal information of citizens is protected under new data protection regulations.

Built on Druva's cloud security foundation, the new framework addresses often-neglected concerns about corporate and employee data misuse and emerging legal data requirements.

Druva centralizes and controls business data residing on employees' desktops, laptops, tablets and smartphones via integrated endpoint backup, data loss prevention, IT-managed file sharing, and data governance controls. Druva continually mirrors end-user data, which enables rapid data recovery for lost or stolen devices, allows remote user access to any file or folder from any device, and supports eDiscovery, compliance and forensics needs.

The components of Druva's data privacy framework are designed to protect organizations from unauthorized data access, thwart misuse of employee data by authorized users, and ensure data integrity for legal or compliance initiatives. It offers support for 11 admin-selectable global regions that are policy-configured to ensure data is stored in a way that meets DPA requirements, including the newest region in Germany.

Druva’s approach of storing unique block data separately from metadata, along with its envelope key encryption model, provides a high level of data scrambling and obfuscation that ensures cloud data privacy -- no third party, not even Druva under court order, can provide access to the data. Its regional end-to-end data management enables global organizations to meet local privacy laws while maintaining a single system of record for corporate governance.
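For readers unfamiliar with the envelope key model mentioned above, the Python sketch below shows the general pattern using the cryptography package: each data block gets its own key, and that key is stored only in a form wrapped by a master key. It is a minimal illustration of the technique, not Druva's actual key hierarchy or storage format.

# A minimal sketch of envelope key encryption using the `cryptography` package.
# This illustrates the general pattern the article describes (a per-object data
# key wrapped by a master key); it is not Druva's actual key hierarchy.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()        # held by the customer or a key service
master = Fernet(master_key)

def encrypt_block(plaintext: bytes):
    data_key = Fernet.generate_key()              # unique key per data block
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = master.encrypt(data_key)        # data key stored only in wrapped form
    return ciphertext, wrapped_key

def decrypt_block(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = master.decrypt(wrapped_key)        # requires access to the master key
    return Fernet(data_key).decrypt(ciphertext)

ct, wk = encrypt_block(b"quarterly-results.xlsx contents")
assert decrypt_block(ct, wk) == b"quarterly-results.xlsx contents"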

With the new data privacy framework, enterprises can designate officers who handle sensitive materials so that their data is not visible to anyone else in the organization. The framework also ensures that all data access and file sharing activity is tracked with tamper-proof audit logs, so that data privacy violations and interference with data integrity can be identified for forensics, regulatory, eDiscovery and compliance investigations.

The data privacy framework can be tailored to employee requirements by region, so that end users can be set to private by default or can flag their personal data to keep administrators from having visibility into it. With Druva’s mobile device containerization and exclusionary backup controls, end-user personal data can be kept separate from corporate data on both BYOD and COPE devices.

The framework can also be adapted to provide scenario-based privacy, with administrative flexibility that enables organizations to address specific needs around compliance and litigation. For example, an explicitly defined legal administrator can override privacy controls to enforce data governance.

The new privacy capabilities include geo-defined governance and administration features that ensure data privacy. Druva customers can also delegate storage and data administration rights to regional personnel, enabling global organizations to meet varied regional data privacy requirements within a single cloud solution. This geo-specific capability is critical for global organizations such as those with operations in Germany, whose data protection act mandates stringent employee data regulations, including a ban on data storage outside the country.

These features complement Druva's use of Amazon Web Services, which recently opened its German region and supports data centers worldwide, as the underlying inSync cloud infrastructure. Druva now supports over 11 regions, which include Germany, GovCloud, Japan and Australia.

With growing data privacy concerns across the world, countries like Germany, France, Russia and Singapore have taken steps to ensure the privacy of their citizens' personal information by adopting new data protection regulations. This, along with existing regulations such as HIPAA and FINRA in the United States, has had a sweeping impact on global corporations. These businesses must now adapt their IT infrastructure to support the varied regional requirements or face potential sanctions and/or legal repercussions.

Posted by Anna Ribeiro on Tue, 24 Feb 2015

IBM Cloud will extend user control, visibility, security to public cloud

IBM Corp. announced Monday new hybrid cloud technology and investments that will tackle the biggest challenges enterprises face as they adopt cloud and integrate existing applications, data and services across a multitude of traditional systems and clouds.

IBM will offer a range of technologies and services that will extend clients’ control, visibility, security and governance in a hybrid cloud environment similar to what clients have in their private cloud and traditional IT systems. In doing so, IBM will provide increased data portability across environments and make it easier for developers to work across cloud and non-cloud environments. 

IBM is devoting more than half of its cloud development team to hybrid cloud innovations, including hundreds of developers working on open cloud standards. More than 65 percent of enterprise IT organizations will commit to hybrid cloud technologies before 2016, pushing the rate and pace of change in IT enterprises. Digitization is accelerating the ongoing evolution of business. Clouds - public, private, and hybrid - enable companies to extend their existing infrastructure and integrate across systems.

IBM Cloud provides the security, control and visibility clients have come to expect, along with the flexibility to run critical applications and processes in an environment that mirrors existing controls. Through the SoftLayer infrastructure combined with the new services IBM is announcing, clients will now have the right tools and environment to combine all of their data, no matter where it resides, to respond to changing market dynamics.

According to IDC, 80 percent of new cloud applications are predicted to be big-data intensive, with much of that data born in the cloud. Driven by the convergence of mobile applications, e-commerce transactions and other Web applications, companies are struggling to gain value from the data being generated in the digital revolution. As a result, businesses are increasingly struggling to incorporate, manage and gain insight into processes and data.

By surfacing new services through composable API-based services in Bluemix, IBM is helping create a hybrid cloud environment that provides clients with the tools they require to extend their business to the cloud. New portability services and open standards move enterprise workloads across environments, bringing the app closer to the data or the data closer to the app.

IBM Enterprise Containers allows clients to build and deliver applications by extending native Linux containers with Docker APIs to provide enterprise-class visibility, control and security as well as an added level of automation. Solutions developed in a cloud environment can be brought to on-premises systems for execution, allowing many of the benefits of cloud computing to be realized for data that cannot be moved to the cloud for processing for reasons of data sensitivity, size or performance.
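Because Enterprise Containers builds on Docker APIs, the general developer experience resembles driving any Docker-compatible endpoint. The sketch below uses the Docker SDK for Python against a local or remote daemon; the SDK choice, image names and port mapping are illustrative assumptions rather than IBM-specific tooling.

# A hedged sketch using the Docker SDK for Python against a Docker-compatible
# API endpoint. The SDK choice, images and port mapping are illustrative
# assumptions, not IBM's documented Enterprise Containers tooling.
import docker

# from_env() honours DOCKER_HOST, so the same code can point at a local daemon
# or a remote Docker-compatible API endpoint.
client = docker.from_env()

# Run a short-lived container and capture its output.
output = client.containers.run("alpine:3", ["echo", "hello from a container"])
print(output.decode().strip())

# Long-running workloads would typically be started detached and inspected later.
web = client.containers.run("nginx:stable", detach=True, ports={"80/tcp": 8080})
print(web.short_id, web.status)
web.stop()
web.remove()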

IBM DataWorks provides new, intuitive tooling and an experience to find, refine, enrich and deliver trusted data, which allows developers to subdivide and manipulate data sets from the treasure trove of public and private data. It offers enhanced visibility and control of clients’ hybrid environments through a single, end-to-end view, is available as a service enabling management across hybrid environments with the largest federated orchestration library, and adds new security features that protect vital data and applications using analytics across the enterprise, public and private clouds and mobile devices.

IBM also announced new services that enable developers to securely connect apps, data and services across an open and flexible environment of traditional systems, cloud platforms and any device, seamlessly weaving data and services together with APIs to compose new apps and services. These include Secure Passport Gateway, which gives developers self-service to securely connect data and services to Bluemix in minutes through a simple Passport service that keeps IT in control; API Harmony, which discovers an ideal API match for a client’s application; and Bluemix Local, which extends Bluemix into a company’s data center with borderless visibility and management across Bluemix environments (public, dedicated, and local).

IBM also debuted Watson Zone, a new resource center on Bluemix that brings together Watson APIs, sample code, training resources and use cases to help inspire and guide users to build a new class of hybrid cloud applications infused with cognitive computing capabilities.

IBM also announced general availability of the Watson Personality Insights service to let developers integrate capabilities to analyze trends and patterns in diverse, high-volume social media and other public data streams. The commercial launch follows the addition of five new beta Watson services to Bluemix, including Speech to Text, Text to Speech, Visual Recognition, Concept Insights and Tradeoff Analytics. These services are available free of charge to help developers explore potential use cases.

To make much of this possible, many of these new services rely on open technologies. As a top contributor to a number of open foundations including Cloud Foundry and OpenStack, IBM has dedicated hundreds of developers to advancing open technologies for the hybrid cloud market. This has led to using OpenStack services to provide companies with the means to deploy and manage cloud workloads, packaging them in Linux-based Docker containers that ensure an open, extensible approach to portability.

Posted by Anna Ribeiro on Tue, 24 Feb 2015

Apcera debuts hybrid cloud operating system platform across on-premise, public environments

Apcera announced Monday that it will showcase its hybrid cloud operating system (HCOS) with Ericsson at next week’s Mobile World Congress in Barcelona, Spain.

The HCOS, built on Apcera's Continuum offering, simplifies and speeds hybrid cloud deployment and management by extending policy across all cloud environments and enabling applications to be easily and automatically shared, moved and governed from a single management platform. The platform also supports the use of multiple cloud providers, such as Amazon Web Services (AWS), Google Compute Engine (GCE) and SoftLayer, with plans to add Azure as well. Customers get immediate access to all cloud vendors in the Continuum ecosystem.

Continuum is a policy-driven platform for Dev, DevOps, Ops and IT managers to deploy diverse workloads, orchestrate them as systems, and govern them on premise and in the cloud. With a built-in and pervasive policy core, Continuum makes governance seamless by transparently injecting and instantly enforcing policies where needed.

HCOS eliminates the current patchwork approach by managing both on-premise and public cloud resources, giving enterprises clear visibility into how all their IT resources are operating and allowing them to control those resources from a single location. With centralized management, the HCOS allows enterprises to apply policy consistently and ensure that changes to apps or moves to other resources don't compromise security.

Set to be available in the third quarter of this year, the HCOS platform enables users to operate any kind of workload, whether it's an application, service, operating system or Docker container, and allows users to use whatever resource is best suited for each component and then enables clients to compose them together into a cohesive application. As policy is enforced across the entire infrastructure, enterprises do not need to rely on multiple firewalls placed at arbitrary borders to ensure the environments and workloads can communicate securely.
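To make the policy idea concrete, the Python sketch below evaluates a couple of hypothetical placement rules against candidate environments and keeps only those that satisfy every rule. It is a conceptual illustration of policy-driven placement, not Apcera's policy language or scheduler.

# Conceptual sketch of policy-driven workload placement, not Apcera's actual
# policy language: each rule constrains where a class of workload may run, and
# the scheduler only considers environments that satisfy every matching rule.
WORKLOAD = {"name": "billing-api", "kind": "docker", "data_class": "pci"}

POLICIES = [
    # PCI-scoped workloads must stay on premises.
    lambda w, env: env["location"] == "on-premise" if w["data_class"] == "pci" else True,
    # Docker workloads need an environment with container support.
    lambda w, env: env["supports_docker"] if w["kind"] == "docker" else True,
]

ENVIRONMENTS = [
    {"name": "aws-us-east", "location": "public", "supports_docker": True},
    {"name": "datacenter-1", "location": "on-premise", "supports_docker": True},
    {"name": "gce-europe", "location": "public", "supports_docker": False},
]

def allowed_targets(workload, environments, policies):
    """Return environments where every policy holds for this workload."""
    return [e for e in environments if all(p(workload, e) for p in policies)]

print([e["name"] for e in allowed_targets(WORKLOAD, ENVIRONMENTS, POLICIES)])
# -> ['datacenter-1']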

The HCOS clears the way for developers to innovate without being slowed down by typical IT concerns about security and interoperability thanks to policy being applied all the way down to the development environments. By giving visibility into how the resources are performing, an HCOS allows enterprises to anticipate issues, move workloads to other resources and scale quickly, easily and automatically as demand requires.

An HCOS is designed to eliminate the challenges of security, control and complexity with which enterprises are currently struggling, and to speed innovation and time to market. With an HCOS, enterprises can rest assured that their hybrid cloud resources will be easy to deploy and manage, while maintaining policy and the highest governance standards. Most important, with an HCOS, enterprises will be able to achieve the return on investment that they expect from their hybrid cloud strategy.

Though currently deploying with select global IT enterprises, the Apcera HCOS platform is applicable for telecom service providers as well. It enables telecom service providers to manage the migration from legacy infrastructure to network functions virtualization (NFV) and software-defined networks (SDNs), and allows them to attain quicker time-to-market for real-time voice, video and messaging apps and other critical deployments, without compromising on security requirements.

“Much like a computer’s operating system ties together all the applications we’re used to using on our laptop, the HCOS manages a workload’s access to the compute resources it needs, and not just on one server, but across a cluster of them both on premise and in public clouds,” wrote Derek Collison, an Apcera executive, in a blog post.

"Our HCOS platform allows freedom of choice of all vendors in the Continuum ecosystem and enables users to securely and transparently migrate workloads -- whether they're apps, services, operating systems or Docker containers -- to any private or public cloud,” said Derek Collison, founder and CEO of Apcera.

As every cloud vendor and equipment manufacturer has its own tools and processes, organizations are unable to seamlessly and automatically apply policy and business logic, or manage resources across all platforms. So, instead of simplifying their IT with the hybrid cloud, enterprises are actually adding new levels of complexity, employee headcount and additional costs.

By extending Continuum into multiple public clouds, such as AWS, GCE and SoftLayer, Apcera has developed an operating system for the hybrid cloud that manages applications' access to the compute resources they need, not just on one computer, but across a cluster of them, both on premise and in public clouds.

The HCOS gives enterprises improved control over resources, automates the execution of programs and the provisioning of resources, provides more visibility into how the resources interact, and enables security and policy to be applied consistently across the entire environment.

Posted by Anna Ribeiro on Mon, 23 Feb 2015

Pivotal boosts SQL on Hadoop, analytical database, NoSQL in-memory technology with Hortonworks alliance

Following the announcement of the Open Data Platform Initiative (ODP) earlier this week, Pivotal and Hortonworks have come out with a unified approach to meeting enterprise data management and analytics needs through a commercial alliance to focus their efforts around a consistent core of Apache Hadoop-based capabilities, including product integrations, joint engineering and production support.

This arrangement marks a concerted strategy by both companies to leverage core competencies to maximize the benefits to their enterprise customers and partners. The alliance includes Pivotal’s SQL on Hadoop, analytical database and NoSQL in-memory technologies, with Hortonworks’ support for Hadoop.

As part of the alliance, Pivotal and Hortonworks will align engineering teams to accelerate the enterprise capabilities of Apache Hadoop with Pivotal technologies, including HAWQ and Pivotal GemFire. Pivotal and Hortonworks will provide all available advanced services in the Pivotal Big Data Suite on the Hortonworks Data Platform (HDP). Beginning with the SQL on Hadoop solution, HAWQ will deliver SQL on Hadoop to Hortonworks customers for advanced analytics. Hortonworks will provide escalation-level support for Pivotal HD 3, Pivotal’s Apache Hadoop distribution.
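From a developer's point of view, querying HAWQ typically looks like querying a relational database, since HAWQ derives from Greenplum and speaks the PostgreSQL wire protocol. The sketch below uses psycopg2 to run an analytic aggregation; the host, credentials and table are hypothetical, and the connection details for a given HDP deployment may differ.

# A hedged sketch of issuing an analytic query against HAWQ with a standard
# PostgreSQL driver. Host, credentials and table names are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="hawq-master.example.local",  # hypothetical HAWQ master host
    port=5432,
    dbname="analytics",
    user="gpadmin",
    password="example-password",
)

with conn, conn.cursor() as cur:
    # SQL on Hadoop: the table's data lives in HDFS, but it is queried like any
    # relational table, including joins and aggregations.
    cur.execute(
        """
        SELECT region, count(*) AS orders, sum(amount) AS revenue
        FROM sales_events
        WHERE event_date >= %s
        GROUP BY region
        ORDER BY revenue DESC
        """,
        ("2015-01-01",),
    )
    for region, orders, revenue in cur.fetchall():
        print(region, orders, revenue)

conn.close()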

The Open Data Platform will promote big data technologies based on open source software from the Apache Hadoop ecosystem and optimize testing among and across the ecosystem’s vendors. These efforts will accelerate the ability of enterprises to build or implement data-driven applications. The initiative includes a range of vendors, including GE, Hortonworks, IBM, Infosys, Pivotal, SAS, AltiScale, Capgemini, CenturyLink, EMC, Splunk, Verizon Enterprise Solutions, Teradata, and VMware.

The ODP will work directly with specific Apache projects, adhering to the Apache Software Foundation (ASF) procedures for the contribution of ideas and code. A key benefit of the ODP will be for members to collaborate across various Apache projects as well as other open source-licensed big data projects with a goal toward meeting enterprise class requirements.

The ODP is likely to encourage a set of standard open source technologies and versions that will increase compatibility among big data solutions and simplify the process for applications and tools to integrate with and run on any compliant system.

Companies in the Open Data Platform initiative will concentrate their efforts first on developing and using offerings focused on core Apache Hadoop use cases. The Open Data Platform will provide access to a tested reference core of Apache Hadoop, Apache Ambari and related Apache source artifacts, which will simplify upstream and downstream qualification efforts to provide a sought-after “test once, use everywhere” core platform.

This week, Pivotal also released a new version of its big data offering, Big Data Suite, built to help customers accelerate the value they get from their big data. The company will open source the core components of the Pivotal Big Data Suite: the analytical MPP data warehouse Pivotal Greenplum Database, the advanced enterprise SQL on Hadoop analytic engine Pivotal HAWQ, and the NoSQL in-memory database Pivotal GemFire. For the first time, the suite will be based on an open source core.

The contributions will be an open, fully functioning core that will provide mission critical resiliency, advanced client support, performance optimizations for demanding enterprise workloads and advanced operational tools.

Posted by Anna Ribeiro on Fri, 20 Feb 2015

Microsoft unveils public preview of Azure HDInsight on Linux to leverage existing skillsets and tooling

Microsoft announced this week a public preview of its Azure HDInsight managed Hadoop service on Linux. Apart from this, the company updated the service with features such as the general availability of Apache Storm, continuing to make Hadoop simpler and easier to use.

Apache Storm on HDInsight is a managed cluster integrated into the Azure environment. It comes as a managed service with an SLA of 99.9 percent uptime and lets users build analytic pipelines in the language of their choice. HDInsight provides support for Storm components written in Java, C# and Python, and Storm topologies support a mix of programming languages - for example, read data using Java, then process it using C#. It also supports the Trident Java interface to create Storm topologies that provide exactly-once processing of messages, "transactional" datastore persistence and a set of common stream analytics operations.
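The spout-and-bolt programming model behind those pipelines can be sketched with a self-contained Python toy: a spout emits click events and a bolt maintains rolling per-page counts. This only illustrates the model; a real topology on HDInsight is built with Storm's own Java, C# or Python component APIs and deployment tooling.

# A self-contained toy that mimics Storm's spout-and-bolt model: a spout emits
# click events and a bolt keeps rolling counts per page. This illustrates the
# programming model only; it is not HDInsight's deployment tooling or the
# Storm component API.
from collections import Counter
import random

def click_spout(n):
    """Stand-in for a spout: yields a stream of (user, page) tuples."""
    pages = ["/home", "/pricing", "/docs"]
    for _ in range(n):
        yield (f"user{random.randint(1, 5)}", random.choice(pages))

class CountBolt:
    """Stand-in for a bolt: consumes tuples and maintains per-page counts."""
    def __init__(self):
        self.counts = Counter()

    def process(self, tup):
        _, page = tup
        self.counts[page] += 1

bolt = CountBolt()
for tup in click_spout(1000):
    bolt.process(tup)           # in Storm the framework routes tuples to bolts

print(bolt.counts.most_common())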

The cluster can be scaled with no impact on running Storm topologies. The solution integrates with other Azure services - Event Hub, Azure Virtual Network, SQL Database, Blob Storage and DocumentDB - and can combine the capabilities of multiple HDInsight clusters using Azure Virtual Network to create analytic pipelines that use HDInsight HBase or Hadoop clusters.

Azure HDInsight is Microsoft’s Hadoop-as-a-service offering built on top of the Hortonworks Data Platform (HDP) and architected for the Azure cloud. HDInsight comes with the ability to process large volumes of data, whether relational or non-relational, in near real-time, allowing anyone to spin up Hadoop on any number of nodes with just a few clicks and within minutes. Because users can spin up clusters on demand, they do not have to handle hardware maintenance, tuning, patching, upgrades or capacity planning. Backed by Azure’s 99.9 percent reliability guarantee, Microsoft monitors clusters 24×7.

Users have a choice to select either Windows or Linux when deploying HDInsight from the Azure portal. Both options are first class citizens, offering simple deployment, SLA, technical support for the entire stack, ranging from Hadoop to the operating system.

Customers with prior experience deploying Hadoop on Linux on-premises will be able to leverage their existing skillsets and tooling (documentation, samples, and templates) to work with HDInsight. Familiar tools like SSH and Ambari are now available to use.

SSH connectivity to HDInsight clusters is enabled by default for all HDInsight on Linux clusters. Users can use an SSH client of their choice to connect to the cluster. Additionally, SSH tunneling can be leveraged for forwarding traffic from the browser to all of the Hadoop web applications.
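As an illustration of that tunneling, the sketch below uses the third-party sshtunnel package to forward a local port to a cluster web endpoint; the hostnames, username and port are assumptions, and Microsoft's documentation describes the supported ways to reach the cluster's web applications.

# A hedged sketch of the SSH tunnelling the article mentions, using the
# third-party `sshtunnel` package (a choice of convenience, not Microsoft
# tooling). Hostnames, the username and the web UI port are assumptions.
from sshtunnel import SSHTunnelForwarder
import requests

with SSHTunnelForwarder(
    ("mycluster-ssh.azurehdinsight.net", 22),   # hypothetical SSH endpoint
    ssh_username="sshuser",
    ssh_password="example-password",
    remote_bind_address=("headnodehost", 8080), # hypothetical web UI host:port
    local_bind_address=("127.0.0.1", 8080),
) as tunnel:
    # Traffic to localhost:8080 is now forwarded to the cluster's web UI.
    resp = requests.get("http://127.0.0.1:8080/", timeout=30)
    print(resp.status_code)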

Ambari is a project for the management and monitoring of Big Data clusters, offering a single view of the performance and state of Hadoop cluster, as well as providing the ability to customize configuration setting, and deploy additional Hadoop apps onto the cluster. Ambari provides monitoring and alerting within the HDInsight cluster, configurable from a web app hosted on the cluster.

Microsoft is committed to openness - “Microsoft loves Linux” - as shown by open sourcing the .NET Core, its contributions to Apache Hadoop, and its support for Docker containers. This approach to openness made it a natural decision to give customers the choice of running their big data Hadoop workloads on both Windows and Linux.

With HDInsight growing 20 percent every month and being one of the fastest growing services in Azure, Microsoft had a steady stream of requests to support Linux so customers could take advantage of the productivity benefits of Hadoop-as-a-service. Responding to this demand, customers can now choose Windows or Linux operating systems when they deploy Hadoop in Azure.

Customers can now augment their current deployments with the cloud and leverage all of their existing skillsets and tooling (documentation, samples, and templates) to do so. Within a hybrid environment, customers can mix the control and flexibility of on-premises deployments with the elasticity and redundancy of the cloud. In conjunction with Hortonworks HDP 2.2’s out-of-the-box Falcon connectors to Azure, HDInsight on Linux can make hybrid scenarios an easy transition.

Last October Microsoft announced the public preview of Apache Storm as clusters on HDInsight. Storm is an open source project in the Hadoop ecosystem which gives users access to a stream analytics platform that can reliably process millions of events in real-time. It is ideal to solve real-time challenges like fraud detection, click stream analysis, financial alerts, telemetry from connected sensors & devices (IoT), and social analytics.

With the general availability of Storm as a deployment option, users can benefit from a quick and easy way to deploy real-time analytics in few clicks and within minutes. It comes with Microsoft’s 99.9 percent service level agreement for uptime and elastic scale powered by the Azure cloud. Any amount of data can be ingested through Event Hubs or Apache Kafka and processed through Storm.

Part of making big data more accessible is enabling developers to be productive in their environments, so Storm is available for both .NET and Java, with the ability to develop, deploy and debug real-time Storm applications directly in Visual Studio. Developers can even mix spouts written in other languages like Java, reusing existing spouts and bolts as part of the topology.

In conjunction with Hortonworks availability of HDP 2.2, Microsoft is also releasing the next version of HDInsight built on Hadoop 2.6, Hive and Pig 0.14, HBase 0.98.4 and more. Teaming up with Hortonworks and the open source community, this version of HDInsight includes work done on Stinger.next to speed up Hadoop queries with the goal of achieving sub-second response times.

The first phase of Stinger.next is now in HDInsight running Hive 0.14. Pig can now process data in ORC files, and can leverage Tez as an execution engine.

To better support customers who are running increasingly large big data workloads in Azure, Microsoft is increasing HDInsight availability on a greater number of virtual machine types and sizes. HDInsight can now utilize A2 to A7 sizes built for general purposes, D-Series nodes that feature solid-state drives (SSDs) and 60-percent faster processors, and A8 and A9 sizes that have InfiniBand support for fast networking.

HBase on HDInsight customers can benefit from the higher memory of the D-Series to increase performance. Storm on HDInsight customers can also benefit from higher memory for loading larger reference data and faster CPUs for higher throughput.

HDInsight is also delivering the general availability of its cluster scaling feature, which enables users to change the number of nodes of a running HDInsight cluster without having to delete and recreate it. Initially, only Hadoop query and Storm clusters will have this ability, with HBase to follow shortly after.

Posted by Anna Ribeiro on Fri, 20 Feb 2015