
Red Hat launches its latest KVM virtualization platform

Red Hat has announced the general availability of Red Hat Virtualization 4, its latest Kernel-based Virtual Machine (KVM)-powered virtualization platform.

Red Hat Virtualization 4 includes both a high-performing hypervisor and a web-based resource manager for administering an enterprise’s virtualization infrastructure. The team claims the platform has been built for both legacy systems and emerging technologies, including containers and cloud-native applications.
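For readers unfamiliar with KVM: it relies on hardware virtualization extensions exposed by the host CPU. A minimal sketch of how one might check for that support on a Linux host follows; the helper function is illustrative only and not part of any Red Hat tooling.

```python
# Minimal sketch: check whether a host CPU exposes the hardware
# virtualization extensions (Intel VT-x as 'vmx', AMD-V as 'svm')
# that a KVM-based platform depends on. These are the standard
# feature-flag names found in /proc/cpuinfo on Linux.

def supports_kvm(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists the vmx or svm flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if {"vmx", "svm"} & flags:
                return True
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("KVM-capable CPU:", supports_kvm(f.read()))
    except FileNotFoundError:
        print("Not a Linux host; /proc/cpuinfo unavailable")
```

Running this on a KVM-capable Linux machine should report True; it only inspects CPU flags and says nothing about whether the kvm kernel module is actually loaded.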

“Our customers continue to rely on virtualization as a vital part of their datacenter modernization efforts while also using it to help bridge to new cloud-native and container-based workloads,” said Gunnar Hellekson, Director of Product Management for Linux and Virtualization at Red Hat. “Red Hat Virtualization provides the economics, performance and agility needed across both traditional and new infrastructure initiatives.”

The platform will be marketed around several new features, including enhanced performance and extensibility, a smaller-footprint hypervisor, new management capabilities and an advanced system dashboard. The team has also stated the offering will support Linux container-based workloads as well as OpenStack private and hybrid cloud deployments.

“While virtualization remains a key underpinning for the modern data centre, customer needs are rapidly evolving to demand more than simply virtualizing traditional workloads,” said Gary Chen, Research Manager of Software Defined Compute at IDC. “Modern virtualization platforms need to address these standard scenarios while making way for the emergence of virtualized containers and cloud computing.”


Ericsson and Red Hat share some open sauce

Having struggled to come to terms with Cisco’s betrayal, Ericsson has decided to have a rebound fling with Red Hat. Go girlfriend!

They’re a promiscuous lot, these tech giants, bed-hopping like a cul-de-sac overgrown with pampas grass. The latest one to throw its keys into the bowl is open software company Red Hat, which had the good fortune to make itself available just as Ericsson was looking to give Cisco a taste of its own medicine.

The overt premise for their ‘partnership’ is the reciprocal benefit derived from Ericsson’s desire to diversify in a software direction and Red Hat’s strategic move into the telco vertical. Red Hat offers the full monty of telco software buzzwords, including OpenStack, containers, NFV, SDN and SDI (software-defined infrastructure, not strategic defence initiative), while Ericsson brings telco market expertise and credibility to the table.

Telecoms.com spoke to Radhesh Balakrishnan, GM of OpenStack at Red Hat, to get all the inside gossip. “The future of the telecommunications industry is moving away from proprietary solutions and towards community-powered open software and standardized hardware,” he winked. “By engaging in an ‘upstream first’ relationship, Ericsson and Red Hat have committed to driving innovations needed for next-generation communications infrastructure via open source communities, including projects focused on software-defined infrastructure for compute, networking and storage.

“These technologies are driving extreme agility for the modern telco, opening the door for automation, scale and reuse, but they need an open source and open standards-based foundation to truly reach their potential. Our alliance with Ericsson helps assure customers that they have a technology based on common code and standards, not a ‘special snowflake’ that becomes more costly over time.”

We’re not sure exactly what an ‘upstream first’ relationship consists of, but it sounds pretty saucy. It seems likely that Red Hat will be paying for dinner more often than not as Ericsson is feeling a bit strapped for cash right now. We can only speculate about the payment in kind Ericsson might offer Red Hat in return.

NFV market forecast to grow 33% per year until 2020, driven by IoT

A new report has had a look at the market for network function virtualization (NFV) and reckons it’s due for a growth spurt.

The number crunching was done by Technavio, which scrutinised the NFV activities of a bunch of key players, including the major networking vendors. To get a sense of the size of the market it looked at revenue obtained from components such as NFV virtualization software and NFV IT infrastructure and services. The conclusion was that revenue associated with NFV will grow at a CAGR of 33% from now until 2020.
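For a sense of how quickly a 33% CAGR compounds over the forecast window, here is a minimal sketch; the base revenue is an arbitrary index value, not a figure from the Technavio report.

```python
# Sketch of how a 33% compound annual growth rate (CAGR) plays out
# over the 2016-2020 forecast window. The base value is a hypothetical
# index, not a revenue figure from the report.

def project(base: float, cagr: float, years: int) -> float:
    """Compound `base` at `cagr` (e.g. 0.33 for 33%) over `years` years."""
    return base * (1 + cagr) ** years

base_2016 = 1.0  # hypothetical index value
for year in range(5):  # 2016..2020
    print(2016 + year, round(project(base_2016, 0.33, year), 2))
# At 33% per year the market roughly triples over four years:
# 1.33**4 ≈ 3.13
```

In other words, if the forecast holds, NFV revenue in 2020 would be a little over three times its 2016 level.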

“The increasing adoption of internet-connected devices will provide a major boost to the global NFV market,” said the report. “A rise in the number of connected devices will lead to the generation of large blocks of data. The growing popularity of ideas such as connected car, connected home, connected health, and smart cities has led many industries such as manufacturing, utilities, retail, automotive, and social media to use IoT for increased data transfer.

“NFV can connect and manage the heterogeneous elements of IoT securely. It is adopted by telecom operators to use the power of virtualization and commercial servers, and open software to build, operate and manage these networks. It maintains the network resources by analyzing and managing the traffic flows throughout the network. Moreover, the deployment of VNFs in NFV platforms such as mobile core, DPI, routing, gateways, traffic management, and security provide the opportunity to customize network services for IoT, which also contribute to this market’s growth over the coming years.

“One of the recent trends gaining significant traction in the market is the increasing adoption of NFV by enterprises. Initially, telecom operators were the dominant adopters of NFV, but the market for NFV is expected to move towards enterprise cloud and internet service providers over the coming years. Private cloud is also one of the proposed areas for NFV deployment. For the private cloud to be more agile, network services have to be provisioned on-demand through NFV.”

OpenCloud, which to be fair has a major interest in the growth of NFV, unsurprisingly welcomed the findings. “Operators are beginning to realise the benefits of NFV, making significant investments to take advantage of the flexibility, price and performance of virtualised network functions (VNFs),” said Chris Haddock, Head of Marketing at OpenCloud.

“These software-based VNFs can be made to work together, breaking-open the fixed and closed nature of traditional hardware-based network appliances. However, to truly reap the benefits of the technology, operators need to be smart about how they choose to virtualise the various functions in their networks.

“Most operators are typically outsourcing NFV to a single network equipment provider, but doing this means following the same vendor equipment lock-in path that operators have always taken, with limited flexibility and opportunity for competitive differentiation. Locking themselves into a big, closed environment waters down the original aims and benefits of virtualisation.

“Instead, operators need to invest in a selection of best-in-class software building blocks, using a number of smaller components, to make best use of the computing resources available to them. Used in this manner, NFV can empower operators to evolve their networks and services at their own pace, ahead-of and in response to local competition.”

It’s fair to say the NFV market hasn’t grown as quickly as was initially expected, but that’s often the case with major new technological paradigms. They look great on PowerPoint but the grim reality of planning, implementing and paying for them is another matter entirely. It could be that industry is only now starting to get its head around what the point of NFV is and, as the report indicated, it could also just be that its time has come due to demand from IoT and better cloud infrastructure. Let’s see.

Telefónica and Nokia embark on a virtual route to the cloud

Finnish kit vendor Nokia must be virtually delighted, after Telefónica went ahead with a deployment of its virtualized router technology.

Nokia will obviously be singing the praises of what its new virtualized Provider Edge (vPE) routers can do, and why Telefónica is all over it. Supposedly, Telefónica will be looking to increase its network reach, accelerate deployment of its enterprise VPN services and extend its service offerings to new points of presence within Spain, and across other territories. It’s a very limited deployment for now, a single virtual router in the network so far, but with the scope to expand.

Ultimately, the deployment of Nokia’s virtualized service router (VSR) is intended to extend the capabilities of fixed networks, furthering the evolution of telco cloud infrastructure to improve time to market and reduce operational expenses.

Telefónica couldn’t have been more excited…

“We have deployed and integrated one virtual PE in the International Backbone of Telefónica,” said Pedro López, Telefónica’s Business Solutions B2B Customer Operations Director. “This is one of the most critical projects in the core of the network that will help us to offer enhanced network services.”

“When multinational companies want to extend their VPNs – or add capacity to those already deployed – speed, performance and reliability are all essential,” said Sri Reddy, GM for Nokia’s IP routing business. “Telefónica has taken a leadership role in virtualization, and with the certification and deployment of the Nokia VSR, they are well positioned to expand the range and reach of their VPN services and address their customers’ evolving needs.”

AT&T, Colt and Orange have a virtualization party

Wholesale operator Colt has claimed its position as the network for networks after doing some SDN and NFV cleverness with AT&T. Coincidentally, Orange announced the launch of an SDN project at the exact same time.

Colt apparently has a hard-on for software, using software defined networking and network functions virtualization APIs to make service provider architectures interoperable with each other. Ultimately, this means SDN-managed services can be booted up and migrated across multiple networks in near real-time; so operators the world over can start collaborating more effectively, apparently.

This would appear to fall under AT&T’s wider network transformation initiative, Domain 2.0. For Domain 2.0, AT&T basically threw a tonne of industry vendors into the ring, let them fight it out, with the lucky winner being allowed to have its way with the network. It has put NFV, SDN and Cloud at the forefront of the transformation effort, which is where Colt comes in.

“This proof of concept is a key building block giving enterprises the power to provision scalable, flexible network services on-demand. The API in our trial makes managing integrated SDNs accessible, agile, flexible, and easy to adopt,” said Rajiv Datta, Colt’s CTO.

Over the last few years, Colt has been developing an elastic transport network based on SDN tech, where it can ostensibly allocate resources, bandwidth and functions to various parts of the network. Now it is partnering with AT&T to deliver exactly that.

Orange, coincidentally, just announced that it has launched the Easy Go Network, a fully-virtualized, network functions on-demand service, fuelled by a lovely little bit of SDN. Funny timing that, innit. Especially as the French telco and AT&T have already collaborated on SDN and all things open.

“It is designed to help businesses anticipate and address their digital needs fast and within budget,” said Pierre-Louis Biaggi, vice president, Connectivity Business Unit at Orange Business Services. “We are using an open-standards based approach to develop our SDN and NFV strategy, and we are planning to launch a universal CPE for larger sites next year. Our ultimate goal is an adaptive network, which we will bring to our customers within the next three years.”

Now we’re not saying that AT&T and Colt are going to be involved in the full-scale deployment of an “adaptive network” in the next few years, we’re just saying we wouldn’t be surprised if they were.

Huawei bags starring role in Telefónica’s Unica project

Telefónica has announced a new partnership with Huawei which will see the Chinese giant take a prominent role in its Unica project.

As part of the new agreement, Huawei will assist Telefónica in building a large scale virtual Evolved Packet Core (vEPC) across thirteen countries in Latin America and Europe. Unica is one of the industry’s largest virtualization projects, aimed at virtualizing network functions in an automated way across all of Telefónica’s operations.

Using Huawei’s vEPC solution, named CloudEPC, the pair hope to adapt Telefónica’s aging network to cope with traffic growth, as well as offering greater flexibility to adapt to the changing environment. Unica as a whole aims to help Telefónica more readily serve customers in the digitally orientated era with more efficient traditional services, as well as IoT/M2M, MVNO, private LTE and Mobile-Edge Computing.

“Telefónica has been actively working for some time in the evolution of network virtualization technologies,” said Javier Gavilán, Planning and Technology Director at Telefónica. “Huawei is a reliable EPC vendor, and a strategic partner of Telefónica collaborating in many NFV areas.

“This large scale vEPC network deployment is a further step within the Telefónica UNICA virtualization program, where a smooth migration to UNICA infra cloud capabilities will be reached following extensive tests in Telefónica LAB. These results provide the confidence needed to continue with the adoption and deployment of virtualized solutions and to enable the transformation to software-driven networking.”

“Huawei is leading the All Cloud strategy for operators’ business success,” said Michael Ma, President of Cloud Core Network at Huawei. “This CloudEPC network build out represents a significant step forward in Telefónica’s cloud transformation roadmap and reinforces our long standing partnership as main EPC provider to Telefónica.”

OPNFV Danube release aims to cure the upstream blues

The Open Platform for NFV project has unveiled its fourth release, named Danube, that aims to work better with other open source projects and improve NFV testing.

Releases such as these are not for the faint-hearted, aiming as they do to address the almost infinitely complex software challenges and interdependencies involved in virtualizing network functions. The headline ingredients of this one include a dollop of improved testing, a dash of functional support for MANO, and a sprinkling of DevOps magic.

“Danube represents an evolutionary turning point for OPNFV,” said Heather Kirksey, director at OPNFV. “It brings together full next-gen networking stacks in an open, collaborative environment. By harnessing work with upstream communities into an open, iterative testing and deployment domain, we’re delivering the capabilities that truly enable NFV, and that is very powerful.”

Here are those key ingredients as described in the OPNFV announcement.

Key enhancements available in OPNFV Danube include:

  • Foundational support and introduction of capabilities for MANO: Integration between NFV Infrastructure/Virtual Infrastructure Manager (NFVI/VIM) with Open-Orchestration (Open-O) platform (now ONAP); instrumentation of NFVI network telemetry to support Service Assurance and other use cases; multi-domain template support (Domino project); and translation features between YANG and Tosca modeling languages (Parser project).
  • Enhanced DevOps automation and testing methodologies bring a fully integrated CI/CD pipeline, the creation of Lab-as-a-Service (LaaS) to enable dynamic provisioning of lab resources, the introduction of stress testing into the OPNFV test suite, and a Common Dashboard that provides a consistent view of the testing ecosystem.
  • Focus on NFV performance including acceleration of the data plane via FD.io integration for all Layer 2 and Layer 3 forwarding (FastDataStacks project), and continued enhancements to OVS-DPDK and KVM. The release also sees a renewed focus on performance test project activities through virtual switch testing (VSPERF project), root cause analysis for platform performance issues (Bottlenecks projects), initial compute subsystem performance testing to lay the groundwork for Benchmarking As a Service (QTIP project), and storage subsystem performance testing (Storperf project).
  • Key NFV architectural enhancements, including the ability to dynamically enable and configure network control through integration with OpenStack Gluon and increased reliability and test cases that support multi-site and High Availability (HA) work.
  • Feature enrichment and hardening in core NFVI/VIM functionality such as IPv6, Service Function Chaining (SFC), L2 and L3 Virtual Private Network (VPN), fault management and analysis, and a continued commitment to support multiple hardware architectures, as well as traditional hardware OEMs, whitebox, and open source hardware through collaboration with the Open Compute Project.

OPNFV’s Tapio Tallgren has also blogged on the matter and you can find further analysis at Light Reading. As the summary above illustrates, NFV is incredibly complicated, which is one of the main reasons it’s taking so long to come into effect. Clearly development needs to happen in an open and collaborative environment, so it’s good to see regular progress from OPNFV.

Red Hat’s telecoms numbers illustrate the importance of ICT convergence

Open source software vendor Red Hat revealed in its recent Q4 and full-year earnings that telecoms was its top vertical.

To find out why this sector has become so prominent for a company more associated with enterprise Linux and middleware, Telecoms.com spoke to Darrell Jordan-Smith, VP of global telecoms and ICT at Red Hat. He explained that while software is of increasing strategic importance to CSPs, they don’t consider enterprise software development to be a core competence and thus need outside help.

“Many CSPs are looking to open source solutions to reduce costs via software but also to reap the promised benefits of innovation and the speed at which innovation can be delivered,” said Jordan-Smith. “Telcos don’t typically have large R&D budgets but with open source they can access developers in the software-defined world. They see open source as a strategic area.”

The software-ization of telecoms stuff, including things like NFV and SDN, is viewed as inevitable. But one thing CSPs are acutely aware of is how damn complicated it is – especially making sure the various chunks of software all work with each other, which is where Red Hat comes in.

“This stuff isn’t easy, it’s complex,” said Jordan-Smith. “Having a partner ecosystem to support operators in stitching it all together in a scalable and predictable way is particularly important.  That’s what we also provide – we partner with the likes of Ericsson, Nokia, Huawei and Cisco to address the complex networking issues operators have today and where they can evolve in the future.”

The big kit vendors are all keen to help out their CSP customers with their emerging software needs – not just traditional areas like BSS/OSS but virtualization and all the cleverness that will be required to make 5G, IoT, etc work. Red Hat’s business model as an open source specialist is essentially to give its software away for free to the open source community and then use its expert position to provide services to users of that software.

“Where Red Hat fits is in building technologies upstream that support telcos as they move to the software-defined world,” said Jordan-Smith. “We are among the largest contributors of many of these upstream projects, such as OpenStack for cloud computing, KVM for virtualization, Kubernetes for containers, JBoss for middleware and software toolsets.

“Because we are upstream-first, we do not create proprietary versions of software, which means operators can choose the tech that most closely delivers against their needs at any given time, helping them be agile and flexible. And upstream-first means everyone has access to the technology and can download and use it.”

All of this has been Red Hat’s model for a while, but it seems to be in the right place at the right time regarding the needs CSPs have for partners to guide them through the increasingly convoluted software labyrinth. Red Hat has positioned itself as one of the companies that will do a lot of the software dirty work for them and this seems to be paying dividends, including its first $100 million deal with a CSP.

In his prepared remarks for the Q4 earnings, Red Hat CEO Jim Whitehurst said “Similar to Q4 last year, the top vertical for the quarter was telecom where we closed a number of new, large deals with several global telecom providers. Part of our overall investment strategy was to position our portfolio of technologies and expand our “go to market” capabilities to further address this market. Our Q4 wins clearly demonstrate the success of our efforts, including the approximately $100 million agreement that I noted a moment ago.”

The status of telecoms as the single biggest vertical for an enterprise Linux and middleware company is a great illustration of the convergence of telecoms and IT that defines the current technological era. Red Hat’s telecoms revenues not only provide an indication of how much is being invested in that transition, but may increasingly offer a barometer of how well that process is going.


Ericsson launches Dynamic Orchestration for all your virtualization needs

Ericsson says the launch of its new Dynamic Orchestration solution is also a strategic milestone in its quest to become a digital transformation player.

It looks like you get quite a few buzzwords for the price of one with this bit of shiny newness as it claims to do the following:

  • Supports end-to-end orchestration of services for hybrid networks
  • Responds to surrounding dynamics in real time
  • Allows zero-touch automation
  • Helps operators virtualize their networks at scale
  • Integrates and controls virtualization capabilities
  • Is therefore handy for things like 5G and IoT
  • Reduces time to market, increases agility, etc

These are bold claims. The move to virtualized and software-based networks has been protracted and fraught, with the light at the end of the tunnel still distant. Ericsson seems to be saying this solution can go a long way towards making the various components of this process play nice with each other, which would certainly be welcome if it delivers.

“The opportunities offered by virtualization are significant, but due to the complexity, many operators are taking an incremental step-by-step approach to get there,” said Ulf Ewaldsson, Head of Business Area Digital Services at Ericsson. “Ericsson Dynamic Orchestration enables our customers to excel at traditional services delivery while simultaneously incorporating virtualization capabilities to embrace emerging market and business opportunities driven by 5G and IoT.”

The rest of Ericsson’s spiel is the standard digital transformation narrative about how important it is that operators get with the times, improve their agility, move to the cloud and generally redesign themselves to operate less like stodgy old telcos and more like Silicon Valley startups. Considering its stated ambition this launch has come with surprisingly little fanfare, so perhaps the time has come for Ericsson to transform its own marketing operations.

Bewildering complexity of telecoms virtualization highlighted at TMF Live

On the opening morning of the TM Forum Live 2017 event a succession of operators and vendors lined up to give their take on how virtualization is progressing.

The ultimate conclusion was ‘it’s complicated’, as illustrated by the prevalence of the all-too-familiar ‘loads of boxes inside bigger boxes’ software slides. Everyone agrees on what a great idea telecoms virtualization is, and on the need for telcos to at least get closer to so-called OTTs in terms of speed, agility, etc, but how to get there seems more convoluted than ever.

A lot of this discussion occurred at the NFV & SDN keynote thread, where the difficulty of marrying top-down strategic objectives with bottom-up facts on the ground was lamented. Right now we still seem to be stuck in a relatively unstructured trial-and-error model, which is very time consuming and offers no guarantee of achieving the desired outcome. One speaker reflected on his presentation being not too dissimilar to one he made at this event seven years ago.

Adding to this complexity is the role of open source in the process. Most people agree that open source is a great model for enabling many disparate stakeholders to contribute to the collective effort. But the resulting flood of software is by definition not optimised for the needs of any individual stakeholder and thus still requires a degree of proprietary refinement.

Orange Poland reflected, for example, on the difficulty of installing and maintaining ONAP – Open Network Automation Platform, itself a hybrid of two other MANO platforms: ECOMP and Open-O. While the ultimate aim is to fine-tune ONAP in-house to suit Orange’s needs, the need to buy off-the-shelf enterprise software, initially at least, was conceded.

And if that wasn’t complex enough ONAP isn’t even the only open source orchestration initiative in this space, with ETSI-backed Open Source MANO also a major factor. The session focusing on orchestration only added to this sense of bewildering complexity with there being many different types of orchestrator, apparently creating the need for an orchestrator orchestrator. Where will it end?

HPE, which sponsored the NFV & SDN stream, reflected on the desirability of ‘intent-based service modelling’, in which telecoms software is so clever that it automatically calculates and implements the optimal path to any desired outcome. Again, this is a lovely concept, but on the evidence of today’s presentations we still have a way to go before this utopia is reached.

Role reversal as Ericsson plays customer to vendor AT&T

In case you weren’t aware AT&T has its own line in on-demand network services and the latest customer is none other than giant networking vendor Ericsson.

The specific product is AT&T FlexWare and the client is Ericsson’s own global corporate network. The benefits being attributed to FlexWare are pretty much the standard ones associated with virtualization – speed, agility, lower opex, etc – but the remarkable part of this news is that it’s being provided by an operator to a networking vendor.

Since it became generally accepted that the telecoms industry needs to virtualize to survive, vendors like Ericsson have all been preoccupied with the business of providing the software and services needed by operators to take this journey. This has not happened at all quickly, thanks in part to the enormous complexity of the task, and announcements such as this reveal that many operators are increasingly taking matters into their own hands.

“We share with Ericsson a passion and vision for transformative and innovative technology,” said John Vladimir Slamecka, region president-Global Business-EMEA, AT&T. “AT&T FlexWare streamlines global network transformation, and helps protect the network investment against future changes. For today’s digital business world, that’s crucial; it helps make innovation happen.”

While networking vendors are doubtless trying to remain significant in the virtualized era you can’t help feeling they do so with a heavy heart. For decades they profited from selling proprietary, hardware-led networking gear that locked operators into their products and maintenance contracts for entire generations of technology.

Now those same vendors are being pushed towards a software-based world where much of the development is happening in the open-source community, thus massively diluting their traditional unique selling points. Furthermore, there are now plenty of new competitors for whom software is more of a core competence. You can read further analysis of this issue at Light Reading.

Ericsson appoints lifer Erik Ekudden as Group CTO

Swedish kit vendor Ericsson has announced the promotion of Erik Ekudden, who has been with the company for 24 years, to Chief Technology Officer.

Ekudden replaces Ulf Ewaldsson, who was given the task of heading up Ericsson’s Digital Services business group in the recent cabinet reshuffle. His full job title is Group CTO and Head of Technology & Architecture, which seems to encompass Ericsson’s strategic approach to the evolution of the network as a platform and ultimately its role in mega-trends like 5G and IoT.

“Ericsson continues to move with speed in revitalizing our technology leadership,” said Ericsson CEO Börje Ekholm. “Erik has broad experience from the technology area, most recently from seven years in Silicon Valley. He is uniquely qualified to support customers and partners, prepare for the opportunities and challenges of the next wave of technology shifts around 5G, IoT, and digitalization.”

“With the development of 5G, we are building a platform for innovation that will create new business models for consumers and enterprises,” said Ekudden. “We see a fundamental shift in networks technology, which will be unprecedented in terms of speed and low latency. My mission will be to define the technology direction of Ericsson and to work closely with customers to design and operate those technologies in the most effective and efficient way. We will do this with a particular focus on the distributed cloud and evolution of mobile systems globally.”

Essentially Ekudden is in charge of keeping Ericsson relevant – no pressure then. As Ekholm indicated, he has been based in Silicon Valley for the past seven years and this seems to be a big deal. That, combined with Ekudden’s talk of distributed cloud, indicates a renewed focus on virtualization as a central pillar of Ericsson’s technology strategy for the foreseeable future.

Vodafone would like to see more urgency from vendors on NFV

We’re heading in the right direction on virtualized network functions, according to Vodafone, but vendors could do more to become ‘cloud-native’.

This was the view of Atul Purohit, Principal Enterprise Architect, Group Technology at Vodafone, in an interview with Telecoms.com at the recent TM Forum Live event. One of Purohit’s main areas is VNF onboarding, which addresses how virtual network functions are integrated into the cloud.

“I think the VNF vendors have been a bit slow in adopting the cloud maturity model,” said Purohit. “They have been virtualizing their stuff but if you ask me whether any VNF provider has a cloud-ready or cloud-native VNF, the answer would be no. The response we’re expecting from the vendors – to mature their offering in terms of cloud readiness and defining their software in a more standard way – that urgency is not yet fully realised.”

It’s one thing to virtualize a network function, it seems, but quite another to package it in such a way that it plays nice in the cloud. “Have they been virtualized? Yes. But have they been virtualized enough, with metamodels on top so they can be onboarded seamlessly? No,” said Purohit. “Just writing applications in software isn’t enough because we need a much more mature way of representing the software – for it to ultimately have cloud native status.

“Can we take a VNF from a provider and wrap it round with the right set of data on top of it to tell what the licensing mechanism, performance KPIs, etc will be? At last year’s event we signed an open API manifesto together with several other service providers because we all need to speak the same language. What’s happening is that each vendor is coming with their own way of expressing their stuff, but it may not be the same language we speak.”
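To illustrate the metamodel idea Purohit describes, here is a minimal sketch of a descriptor wrapped around a vendor's VNF. Every class and field name below is hypothetical – this is not Vodafone's schema or any standard's actual data model, just an illustration of onboarding metadata expressed in a common language:

```python
from dataclasses import dataclass, field

@dataclass
class VnfDescriptor:
    """Hypothetical metamodel wrapped around a vendor's VNF image,
    capturing the onboarding data Purohit mentions: licensing
    mechanism, performance KPIs and resource needs, expressed the
    same way regardless of which vendor supplied the VNF."""
    name: str
    vendor: str
    image_ref: str   # e.g. a VM image or container tag
    licensing: str   # e.g. "per-instance" or "per-Gbps"
    kpis: dict = field(default_factory=dict)  # e.g. {"max_latency_ms": 5}
    vcpus: int = 2
    memory_gb: int = 4

    def onboarding_gaps(self) -> list:
        """Return the metadata still missing before seamless onboarding."""
        gaps = []
        if not self.licensing:
            gaps.append("licensing")
        if not self.kpis:
            gaps.append("kpis")
        return gaps

# A virtualized-but-not-cloud-ready VNF: the software exists,
# but the metadata needed for automated onboarding does not.
bare = VnfDescriptor(name="vFirewall", vendor="AcmeNet",
                     image_ref="acme/vfw:1.0", licensing="", kpis={})
print(bare.onboarding_gaps())  # ['licensing', 'kpis']
```

In this sketch, a VNF that has merely been "written in software" fails the onboarding check, while one carrying the full metamodel passes – which is the gap between "virtualized" and "cloud-native" that Purohit is pointing at.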

The main purpose of the TM Forum is to promote collaboration within the telecoms industry, something that is apparently more needed than ever given the complexity of the move to virtualization. Vodafone was participating in several ‘catalysts’ at the event, which are designed to showcase specific examples of successful collaboration.

This isn’t the first time Vodafone has expressed its concerns about the rate of progress of VNF onboarding. A year ago Vodafone’s head of SDN and NFV expressed similar concerns to Light Reading, so it looks like vendors may still not have got the memo. We asked whether, since virtualization challenges many traditional vendor commercial models, they might be deliberately dragging their feet on this stuff.

“It’s a tough problem to crack – it’s not as simple as just hiring a lot of software developers,” said Purohit. “Everyone is moving in the right direction but the pace is still a question. And the kind of leadership we should see in the urgency to move from a box to a cloud-native ecosystem is probably a bit less than we would like.”

The rate of progress of telecoms virtualization was a recurring theme at the TM Forum Live event. Such is the complexity of the project that it currently seems to defy things like standardization, of which Purohit would clearly have liked to see more by now. Whatever the reason, vendors should look seriously into this perceived lack of urgency on their part.

AT&T boosts in-house virtualization competence with Vyatta acquisition


Giant US operator AT&T has continued its quest for self-reliance in the virtualization era with the acquisition of Vyatta Software from Brocade.

The two things AT&T seems to covet most within Vyatta are its SD-WAN and white box capabilities, with both being components of SDN. AT&T will also acquire the Vyatta network operating system, which includes some existing virtualized network functions and, it seems, some tools to help make new ones.

This acquisition is symptomatic of AT&T’s desire to take matters into its own hands as the virtualization era progresses. In a recent interview with Telecoms.com a senior software architect at Vodafone revealed his company’s frustration at the rate of progress from vendors in this area and in a symbolic move networking vendor Ericsson actually purchased some virtualization goodness from AT&T.

“Our network transformation effort lets us add new features quicker than ever before at a much lower cost,” said Andre Fuetsch, CTO and President of AT&T Labs. “Being able to design and build the tools we need to enable that transformation is a win for us and for our customers.”

One of the reasons Vyatta was available is that Brocade itself is in the process of being acquired by Broadcom, which is mainly interested in its enterprise storage connectivity capabilities. As a consequence Brocade has been selling off its IP networking bits and bobs, including the sale of Ruckus to Arris soon after the Broadcom deal was announced, and this is the latest disposal. You can read further analysis of this move on Light Reading.

SK Telecom has a crack at the interoperability puzzle


SK Telecom has launched T-MANO, an NFV MANO platform, its own take on the all-important business of orchestrating the virtualized telecoms world.

T-MANO has been optimized for SK’s network using specifications put forward by the ETSI NFV Industry Specification Group, putting a mark in the win column for the standardization cheerleaders. Virtualization has seen a bit of a faltering roadmap over the last couple of years due to the fact that it is actually quite complicated, but SK is showing the world that interoperability is possible.

And it won’t just be the bods at SK Telecom who benefit from the MANO juice, as the team plans to open up the APIs of T-MANO so that anyone can use it to build virtualized network equipment or software.

“With the commercialization of T-MANO, SK Telecom secures the basis for accelerating the application of NFV technologies to provide better services for customers,” said Choi Seung-won, Head of Infrastructure Strategy Office at SK Telecom. “We will continue to develop NFV technologies and accumulate operational knowhow for virtualized networks to thoroughly prepare for the upcoming era of 5G.”

It’s a step in the right direction for the standardization cheerleaders, as SK states the platform allows service quality and data traffic to be managed in an integrated manner regardless of equipment manufacturer. Prior to the launch of T-MANO, the team had to develop, build and operate a separate NFV management platform for each vendor’s network equipment, due to their differing specifications. It was a time-consuming and potentially expensive activity, perhaps explaining the staggered progress of virtualization.
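The before-and-after can be sketched in a few lines. This is not SK Telecom's T-MANO code – all class names here are hypothetical – but it shows the pattern a common MANO layer enables: per-vendor specifics hidden behind adapters, with one integrated control point on top:

```python
class VnfManager:
    """Hypothetical per-vendor adapter interface. Before a common
    MANO layer, each vendor's kit effectively needed its own
    management platform like this, built to its own specifications."""
    def deploy(self, function: str) -> str:
        raise NotImplementedError

class VendorAAdapter(VnfManager):
    def deploy(self, function: str) -> str:
        return f"vendor-A proprietary deploy of {function}"

class VendorBAdapter(VnfManager):
    def deploy(self, function: str) -> str:
        return f"vendor-B proprietary deploy of {function}"

class Orchestrator:
    """One integrated control point, ETSI-MANO style: deployment,
    quality and traffic management all go through here regardless
    of which manufacturer supplied the equipment."""
    def __init__(self):
        self.adapters = {}

    def register(self, vendor: str, adapter: VnfManager) -> None:
        self.adapters[vendor] = adapter

    def deploy(self, vendor: str, function: str) -> str:
        return self.adapters[vendor].deploy(function)

orch = Orchestrator()
orch.register("vendor-A", VendorAAdapter())
orch.register("vendor-B", VendorBAdapter())
print(orch.deploy("vendor-A", "vEPC"))  # vendor-A proprietary deploy of vEPC
```

The operator's cost saving comes from maintaining one orchestrator plus thin adapters, rather than a full management platform per vendor.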

The first area to get a bit of the T-MANO juice will be its virtualized VoLTE routers, though it will be expanded to the virtualized LTE EPC before too long, and then onto the MMS server. According to SK Telecom, virtualized EPC will account for around 80% of newly deployed EPC in 2017, before the operator deploys nothing but virtualized EPC by 2019.

For a more in-depth view on progress at SK Telecom, check out Iain Morris’ piece over on sister-site Light Reading.

 


Vodafone explores software-defined radio with a squeeze of Lime


Vodafone Group is enlisting the help of UK outfit Lime Microsystems to develop a software-defined radio platform that supports its open RAN ambitions.

Lime aims to commoditize the RAN such that it is open to all developers in much the same way as the PC platform has been for years. As the virtualization era progresses it has become clear that the old paradigm of closed, proprietary networking technology is over and a much more open environment is required. Lime thinks that software-defining the RAN is a key part of this.

Vodafone has been one of the most vocal operators in demanding the industry gets a move on with virtualization and all the utopian goodness it promises the industry. It is also one of the operators showing a desire to become less dependent on the big networking vendors and do more of the R&D heavy lifting itself, as indicated by its recent tech day.

“Lime Micro is at the forefront of software defined radio wireless technology development, and the platform being app-enabled brings the concepts of agile and feature-rich systems together, unlocking new applications that leverage this radio flexibility and openness to build new services and a completely different radio,” said Francisco Martin, Head of Radio Product at Vodafone.

“LimeNET is the next phase in virtualizing wireless networks and bringing products that operators can use for future real-world deployments,” said Ebrahim Bushehri, CEO of Lime. “The radio was limited before in terms of the flexibility of providing various frequencies and modes, but we’ve solved that with our field programmable radio that can adapt dynamically to multiple bands.

“Wireless innovation has been limited by access to affordable, easy-to-use, maintainable and upgradeable hardware. By making radio networks software configurable, LimeNET is changing this and is aligning well with Vodafone’s Open RAN initiative to virtualize RAN functionality and enable decoupling of hardware, software and third party applications using general purpose platforms.”

In an interview with Telecoms.com Bushehri revealed that the key to this technology is software-defining the RAN and then using generic chips from vendors such as Intel rather than proprietary ones. This brings the whole coding community into play and partnerships with the likes of Canonical enable that coding to be done on Linux.

Ultimately Lime considers its work to be well positioned for a range of emerging technological trends in the telecoms world. Not only does software-defining the RAN fit in well with virtualization, but its field-programmable modules are considered to be a good fit with mobile-edge computing.

Nokia Bell Labs shows off its big new aaS


Nokia Bell Labs has shocked the world with the invention of another aaS, this time targeting the big, wide world of 5G.

Putting together a consortium of players throughout Europe, the new aaS rolls right off the tongue, standing for Next Generation Platform-as-a-Service. The consortium is part of the European Commission’s 5G Infrastructure Public-Private Partnership (5G-PPP), and features organizations such as BT, Orange, ATOS and the University of Milano-Bicocca.

“The consortium’s ambition for developing a next generation PaaS is to enable developers to collaborate within the 5G ecosystem (operator, vendor, third party) in order to ignite new businesses; thereby increasing market scale and improving market economics,” said Bessem, Research Manager for Nokia Bell Labs and Project Leader for the consortium.

The consortium is relatively logical, as the majority of the 5G revolution will take place in the cloud, and following that shift such Platform-as-a-Service models will become much more common.

It’s also another small step towards virtualization, which has had a troubled road to date. Some corners of the industry might complain it has been taking too long, but when you look at it logically there is a very good reason for this delay: virtualization is really difficult.

The platform itself will aim to deliver two features. Firstly, a 5G cloud-native platform must facilitate building, shipping and running VNF applications with ‘telco-grade’ quality. And secondly, it must be open, to allow the combination of all sorts of third-party applications with those VNFs.

NEC/Netcracker, Red Hat, Juniper and Dell EMC combine for Malaysian virtualization platform


NEC and Netcracker have announced a new tie up with Red Hat, Juniper Networks and Dell EMC to offer an end-to-end multivendor 5G-ready virtualization platform in Malaysia.

The new collaboration will see NEC and Netcracker position themselves as a full SDN/NFV solution provider, offering services to both ISPs and enterprise customers. The pair claim the proposition will allow them to design and deploy network architecture concepts, virtualizing entire classes of network node functions into building blocks that may connect or chain together to create communication services.

“Around the world we are seeing service providers in the US, South Korea, Sweden, Estonia, Turkey, Japan and China upgrading their network infrastructure in preparation to offer 5G communications services which are imminent,” said Chong Kai Wooi, Managing Director, NEC Corporation of Malaysia.

“Commercializing such services, including the massive connectivity of people, transportation, objects and cities, is expected to take off in the next two to three years.”

The new collaboration claims the platform will drastically reduce time to market for new services. While service providers and enterprises are believed to need six to twelve months to introduce a new service, the Ecosystem 2.0 Program can reduce this by up to 70%.

In terms of the other players in the tie up, Red Hat will be offering its Infrastructure-as-a-Service (IaaS) solution, Juniper’s NFV networking services platform will be used to integrate physical and virtual elements, while Dell EMC will also contribute on the NFV front with its PowerEdge platform.

“As the industry moves quickly towards 5G technology, getting the management and orchestration environment right is critical to enable new IoT use cases requiring dynamic network slicing,” said Aloke Tusnial, CTO of SDN/NFV at Netcracker. “This is a key focus for us at Netcracker and we are delighted to be part of this strong collaboration to bring 5G virtualization to market faster.”

In the telco world, NEC might well be a company to keep an eye on. While operators account for only 50% of the company’s revenues each year, the team has big ambitions to increase this share year-on-year, with the SDN/NFV portfolio leading the charge.

“As service providers and 5G technology services take center stage in the near future, we foresee our SDN/NFV solution to contribute more than 10% to our carrier solutions revenues per year within the next three years,” said Chong.

Huawei looks to the power of slicing and dicing


Speaking at the New-Generation Internet Infrastructure Forum in Beijing, Huawei has announced a new innovation project looking at 5G power slicing technologies.

Working alongside China Telecom Beijing Research Institute and China Electric Power Research Institute, Huawei will investigate how slicing technologies can benefit the power industry, such as automatic power distribution.

While the general benefits of 5G have been loudly and proudly proclaimed by all, over the next couple of years you can expect some much more vertical specific ideas. In this instance, the trio will try to prove a 5G network slice can achieve security and isolation at the same level as those provided by a private power grid, but at a much lower cost and leaving wiggle room for the smart grid.
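What a slice of this kind has to guarantee can be sketched as a simple descriptor. All the names and figures below are hypothetical illustrations, not the trio's actual slice parameters – the point is that the isolation and SLA guarantees are carved out of a shared 5G network rather than a dedicated private grid:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkSlice:
    """Hypothetical slice definition for a power-grid tenant.
    Slicing promises private-grid-level security and isolation
    on shared infrastructure, at much lower cost."""
    slice_id: str
    tenant: str
    max_latency_ms: float     # automatic power distribution needs low latency
    min_bandwidth_mbps: float
    isolated: bool            # traffic logically separated from other tenants

# Illustrative values only.
grid_slice = NetworkSlice(slice_id="slice-grid-01",
                          tenant="power-utility",
                          max_latency_ms=10.0,
                          min_bandwidth_mbps=50.0,
                          isolated=True)
print(grid_slice.slice_id)  # slice-grid-01
```

Proving that a slice defined this way actually holds its isolation and latency bounds under load is precisely what the three-party verification project sets out to do.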

“5G slicing provides differentiated capabilities for diverse requirements of innovative industry applications,” said Zhu Xuetian, Director of Network Technology & Planning Department of China Telecom Beijing Research Institute.

“The three-party collaboration project is the first exploration of 5G slicing in power industry applications. 5G slicing is applied to vertical industries, such as the electric power industry, and this will incubate more new applications and business models.”

“The core network is critical to implement 5G slicing features, such as on-demand network definition, fast deployment, automatic operation, end-to-end SLA assurance, and capability exposure,” said Qiu Xuefeng, VP of Packet Core Network, Huawei Cloud Core Network Product Line.

“Huawei Cloud Core Network started early to invest in R&D of 5G network architecture evolution and slicing technology and has many leading achievements. This project will enable end-to-end technical verification of 5G slicing in smart grid industry applications and accelerate development of sophisticated technologies and solutions.”

Elsewhere in the Huawei world, the team has released a new idea just to remind us it hasn’t forgotten about the SDN/NFV euphoria.

TestCraft is designed for SDN/NFV testing, providing a variety of automated test models and professional services to make sure you’ve nailed that virtualization idea. In short, Huawei is saying there are so many services and functions that operators need to test, why not let it do the testing for them? It’s considerate of them if anything. Light Reading has more on this.


Three makes moves in the telco cloud space


Mycom OSI has announced it has been selected to assure Three UK’s next-generation core network, which deploys NFV and SDN, as part of what it claims is the world’s first Telco Cloud.

As demand for VoLTE, high definition video and other digital services continue to grow, Three will deploy a new cloud native core network, which it says will enable massive scalability, elasticity, and better reliability for customers. It’s another incremental step towards IoT and 5G, providing a mid-term solution to increase speed in response to customers’ dynamic service demands.

“We are excited and privileged to be selected by Three UK for the world’s foremost network virtualization project. While others are debating various approaches and standards, Three has designed a leading architecture, selected leading partners and is now leading its peer group in deploying Telco Cloud,” said Mycom OSI President, Mounir Ladki.

“Mycom OSI’s Assurance suite will enable Three to deliver market-leading customer experience, agility, scale and reliability whilst embracing exciting new opportunities with digital services, IoT and 5G.”

As part of the agreement, Mycom OSI’s Experience Assurance and Analytics suite will be deployed to monitor Three UK’s Telco Cloud. The suite will assure both new virtualized and existing physical networks, and provide closed-loop assurance-driven orchestration based on end-to-end network and service quality.

Elsewhere in Three’s telco cloud world, Astellia has been brought into the fold to help the team transform the network to a virtualized and software-based architecture. The work will focus on providing the visibility and the capability to improve network performance and customer experience.

“Our customers are at the heart of everything we do. We were the first UK network to introduce all-you-can-eat data and we let our customers roam abroad at no extra cost in 60 destinations,” said Adam O’Keeffe, Head of OSS Transformation at Three UK.

“Astellia’s technology will help build upon our already excellent customer experience by deploying the capability to monitor the performance of services and customer experience on our new virtualized technology.”

Using Astellia’s vProbes-based virtual monitoring solution, the team will monitor traffic within the virtual infrastructure of Three’s network, raise alarms for performance degradation and troubleshoot issues. Astellia will provide various network, subscriber and service analytics with the ambition of improving the customer’s quality of experience.
